ln-1000-pipeline-orchestrator by levnikolaevich/claude-code-skills
npx skills add https://github.com/levnikolaevich/claude-code-skills --skill ln-1000-pipeline-orchestrator
Paths: File paths (shared/, references/, ../ln-*) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root. If shared/ is missing, fetch files via WebFetch from https://raw.githubusercontent.com/levnikolaevich/claude-code-skills/master/skills/{path}.
Type: L1 Orchestrator Category: 1000 Pipeline
Drives a selected Story through the full pipeline (task planning -> validation -> execution -> quality gate) by invoking coordinators as Skill() calls in a single context.
L0: ln-1000-pipeline-orchestrator (sequential Skill calls, single context)
+-- Skill("ln-300") — task decomposition (internally manages its own workers)
+-- Skill("ln-310") — validation (internally launches Codex/Gemini agents)
+-- Skill("ln-400") — execution (internally dispatches Agent(ln-401/403/404), Skill(ln-402))
+-- Skill("ln-500") — quality gate (internally runs ln-510/ln-520, verdict, finalization)
Key principle: ln-1000 invokes coordinators via the Skill tool. Each coordinator manages its own internal worker dispatch. ln-1000 does NOT modify existing skills -- it calls them exactly as a human operator would.
MANDATORY READ: Load shared/references/tools_config_guide.md and shared/references/storage_mode_detection.md
Extract: task_provider = Task Management -> Provider (linear | file).
MANDATORY READ: Load references/pipeline_states.md for transition rules and guards.
Backlog --> Stage 0 (ln-300) --> Backlog --> Stage 1 (ln-310) --> Todo
(no tasks) create tasks (tasks exist) validate |
| NO-GO |
v v
[retry/ask] Stage 2 (ln-400)
|
v
To Review
|
v
Stage 3 (ln-500)
| |
PASS FAIL
| v
Done To Rework -> Stage 2
(branch pushed) (max 2 cycles)
| Stage | Skill | Input Status | Output Status |
|---|---|---|---|
| 0 | ln-300-task-coordinator | Backlog (no tasks) | Backlog (tasks created) |
| 1 | ln-310-multi-agent-validator | Backlog (tasks exist) | Todo |
| 2 | ln-400-story-executor | Todo / To Rework | To Review |
| 3 | ln-500-story-quality-gate | To Review | Done / To Rework |
PIPELINE="{skill_repo}/ln-1000-pipeline-orchestrator/scripts/cli.mjs"
recovery = Bash: node $PIPELINE status
IF recovery.active == true:
# Previous run interrupted -- resume from CLI state
1. Extract: story_id, stage, resume_action from recovery JSON
2. Re-read kanban board -> verify story still exists
3. IF recovery.state.worktree_dir exists: cd {recovery.state.worktree_dir}
4. Jump to Phase 4, starting from resume_action
IF recovery.active == false:
# Fresh start -- proceed to Phase 1
MANDATORY READ: Load references/kanban_parser.md for parsing patterns.
Auto-discover docs/tasks/kanban_board.md (or Linear API via storage mode operations)
Extract project brief from target project's CLAUDE.md (NOT skills repo):
project_brief = {
name: <from H1 or first line>,
tech: <from Development Commands / tech references>,
type: <inferred: "CLI", "API", "web app", "library">,
key_rules: <2-3 critical rules>
}
IF not found: project_brief = { name: basename(project_root), tech: "unknown" }
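As an illustration, the CLAUDE.md fallback above could be sketched like this. This is a loose sketch, not the actual parser: the H1/first-line heuristic comes from the pseudocode, while the function name and the "tech: unknown" default for unparsed briefs are assumptions.

```javascript
// Sketch of project-brief extraction from CLAUDE.md (hypothetical helper).
// Name comes from the H1 if present, else the first non-empty line,
// else the project root's basename (per the fallback above).
function projectBriefFromClaudeMd(content, projectRootName) {
  if (!content) return { name: projectRootName, tech: "unknown" };
  const h1 = content.match(/^#\s+(.+)$/m);
  const firstLine = content.split("\n").find((l) => l.trim() !== "");
  return {
    name: h1 ? h1[1].trim() : (firstLine ?? projectRootName),
    tech: "unknown", // tech/type/key_rules extraction omitted in this sketch
  };
}
```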
Parse all status sections: Backlog, Todo, In Progress, To Review, To Rework
Extract Story list with: ID, title, status, Epic name, task presence
Filter: skip Stories in Done, Postponed, Canceled
Detect task presence per Story:
_(tasks not created yet)_ -> no tasks -> Stage 0
Determine target stage per Story (see references/pipeline_states.md Stage-to-Status Mapping)
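The task-presence check could be sketched as below. The `_(tasks not created yet)_` marker string is from the spec; the kanban line shapes (indented `- [ ]` task bullets) are assumptions -- references/kanban_parser.md is authoritative.

```javascript
// Hypothetical sketch: decide whether a Story's kanban section already
// has tasks, or carries the "no tasks yet" marker (=> Stage 0).
function detectTaskPresence(storySection) {
  if (storySection.includes("_(tasks not created yet)_")) {
    return { hasTasks: false, targetStage: 0 };
  }
  // Assumed shape: tasks are indented checkbox bullets under the Story line
  const taskLines = storySection
    .split("\n")
    .filter((line) => /^\s+- \[.\] /.test(line));
  return { hasTasks: taskLines.length > 0, taskCount: taskLines.length };
}
```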
Display available Stories and let the user select one:
Project: {project_brief.name} ({project_brief.tech})
Available Stories:
| # | Story | Status | Stage | Skill | Epic |
|---|-------|--------|-------|-------|------|
| 1 | PROJ-42: Auth endpoint | To Review | 3 | ln-500 | Epic: Auth |
| 2 | PROJ-55: CRUD users | Backlog (no tasks) | 0 | ln-300 | Epic: Users |
| 3 | PROJ-60: Dashboard | Todo | 2 | ln-400 | Epic: UI |
AskUserQuestion: "Which story to process? Enter # or Story ID."
Store the selected story. Extract the story brief for the selected story only:
description = get_issue(selected_story.id).description
story_briefs[id] = parse <!-- ORCHESTRATOR_BRIEF_START/END --> markers
IF no markers: story_briefs[id] = { tech: project_brief.tech, keyFiles: "unknown" }
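The marker-based brief extraction above might look like this. The marker comments are from the spec; the regex and the `raw` field for the extracted text are assumptions, since the brief's internal format is not specified here.

```javascript
// Sketch: pull the story brief from between the ORCHESTRATOR_BRIEF
// markers in the issue description, falling back to the project brief.
function parseStoryBrief(description, projectBrief) {
  const match = description.match(
    /<!-- ORCHESTRATOR_BRIEF_START -->([\s\S]*?)<!-- ORCHESTRATOR_BRIEF_END -->/
  );
  if (!match) {
    // Fallback shape mirrors the pseudocode above
    return { tech: projectBrief.tech, keyFiles: "unknown" };
  }
  return { raw: match[1].trim() };
}
```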
(project_brief.tech ecosystem) (project_brief.key_rules)
Skip Phase 2 if no business questions are found. Proceed directly to Phase 3.
IF storage_mode == "linear":
statuses = list_issue_statuses(teamId=team_id)
status_cache = {status.name: status.id FOR status IN statuses}
REQUIRED = ["Backlog", "Todo", "In Progress", "To Review", "To Rework", "Done"]
missing = [s for s in REQUIRED if s not in status_cache]
IF missing: ABORT "Missing Linear statuses: {missing}. Configure workflow."
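Written out as runnable JavaScript, the status-cache check above might look as follows. The `list_issue_statuses` call is Linear's tool and is not reproduced here; `buildStatusCache` is a hypothetical name for the cache-and-validate step.

```javascript
// Required Linear workflow statuses, per the pipeline spec above.
const REQUIRED = ["Backlog", "Todo", "In Progress", "To Review", "To Rework", "Done"];

// Build {name: id} cache from statuses returned by list_issue_statuses,
// aborting when any required status is missing.
function buildStatusCache(statuses) {
  const cache = Object.fromEntries(statuses.map((s) => [s.name, s.id]));
  const missing = REQUIRED.filter((name) => !(name in cache));
  if (missing.length > 0) {
    throw new Error(`Missing Linear statuses: ${missing.join(", ")}. Configure workflow.`);
  }
  return cache;
}
```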
Verify .claude/settings.local.json in target project:
defaultMode = "bypassPermissions" (required for Agent workers spawned by coordinators)
MANDATORY READ: Load shared/references/git_worktree_fallback.md
branch_check = git branch --show-current
IF branch_check matches feature/* / optimize/* / upgrade/* / modernize/*:
worktree_dir = CWD
project_root = CWD
branch = branch_check
ELSE:
story_slug = slugify(selected_story.title)
branch = "feature/{selected_story.id}-{story_slug}"
worktree_dir = ".hex-skills/worktrees/story-{selected_story.id}"
project_root = CWD
changes = git diff HEAD
IF changes not empty:
git diff HEAD > .hex-skills/pipeline/carry-changes.patch
git fetch origin
git worktree add -b {branch} {worktree_dir} origin/master
IF .hex-skills/pipeline/carry-changes.patch exists:
git -C {worktree_dir} apply .hex-skills/pipeline/carry-changes.patch && rm .hex-skills/pipeline/carry-changes.patch
IF apply fails: WARN user "Patch conflicts -- continuing without uncommitted changes"
cd {worktree_dir} # All subsequent Skill calls inherit this CWD
Coordinators self-detect feature/* on startup -> skip their own worktree creation (ln-400 Phase 1 step 5).
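The branch/worktree naming in the ELSE path above can be sketched as below. The spec only names `slugify`; its exact behavior (lowercase, hyphen-join) is an assumption here.

```javascript
// Hypothetical slugify: lowercase, non-alphanumerics collapsed to hyphens.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Derive branch name and worktree path for a fresh (non-feature-branch) run.
function worktreePlan(story) {
  const branch = `feature/${story.id}-${slugify(story.title)}`;
  const worktreeDir = `.hex-skills/worktrees/story-${story.id}`;
  return { branch, worktreeDir };
}
```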
Bash: node $PIPELINE start \
--story {selected_story.id} \
--title "{selected_story.title}" \
--storage {storage_mode} \
--project-brief '{JSON.stringify(project_brief)}' \
--story-briefs '{JSON.stringify(story_briefs)}' \
--business-answers '{JSON.stringify(business_answers)}' \
--status-cache '{JSON.stringify(status_cache)}' \
--skill-repo-path "{skill_repo}" \
--worktree-dir "{worktree_dir}" \
--branch-name "{branch}"
IF result.recovery == true:
# Active run found -- resume instead of fresh start
Jump to Phase 4 using result.state
IF platform == "win32":
Bash: cp {skill_repo}/ln-1000-pipeline-orchestrator/references/hooks/prevent-sleep.ps1 .hex-skills/pipeline/prevent-sleep.ps1
Bash: powershell -ExecutionPolicy Bypass -WindowStyle Hidden -File .hex-skills/pipeline/prevent-sleep.ps1 &
sleep_prevention_pid = $!
MANDATORY READ: Load references/phases/phase4_flow.md for ASSERT guards, stage notes, context recovery, and error handling.
MANDATORY READ: Load references/checkpoint_format.md for checkpoint schema.
# --- INITIALIZATION ---
id = selected_story.id
target_stage = determine_stage(selected_story) # pipeline_states.md / guards.mjs
# --- PROGRESS TRACKER (survives compaction) ---
TodoWrite([
{content: "Stage 0: Task Decomposition (ln-300)", status: "pending", activeForm: "Decomposing tasks"},
{content: "Stage 1: Validation (ln-310)", status: "pending", activeForm: "Validating story"},
{content: "Stage 2: Execution (ln-400)", status: "pending", activeForm: "Executing tasks"},
{content: "Stage 3: Quality Gate (ln-500)", status: "pending", activeForm: "Running quality gate"},
{content: "Pipeline Report + Cleanup", status: "pending", activeForm: "Generating report"}
])
# --- STAGE 0: Task Decomposition ---
IF target_stage <= 0:
Bash: node $PIPELINE advance --story {id} --to STAGE_0
Skill(skill: "ln-300-task-coordinator", args: "{id}")
Re-read kanban -> ASSERT tasks exist under Story, count IN 1..8
IF ASSERT fails: Bash: node $PIPELINE pause --story {id} --reason "Task creation failed"; ESCALATE
Write stage notes: .hex-skills/pipeline/stage_0_notes_{id}.md (Key Decisions, Artifacts)
Bash: node $PIPELINE checkpoint --story {id} --stage 0 --plan-score {score} --tasks-remaining '{JSON tasks}' --last-action "Tasks created"
# --- STAGE 1: Validation ---
IF target_stage <= 1:
Bash: node $PIPELINE advance --story {id} --to STAGE_1
IF advance fails (guard rejection): handle per error.recovery
Skill(skill: "ln-310-multi-agent-validator", args: "{id}")
Re-read kanban -> ASSERT Story status = Todo
Extract readiness_score from ln-310 output
IF NO-GO:
Bash: node $PIPELINE advance --story {id} --to STAGE_1 # retry (guard auto-increments validation_retries)
IF advance fails: Bash: node $PIPELINE pause --story {id} --reason "Validation retry exhausted"; ESCALATE
Skill(skill: "ln-310-multi-agent-validator", args: "{id}") # retry
Re-read kanban -> ASSERT Story status = Todo
IF still NOT Todo: Bash: node $PIPELINE pause --story {id} --reason "Validation failed"; ESCALATE
Extract agents_info from .hex-skills/agent-review/review_history.md or ln-310 output
Write stage notes: .hex-skills/pipeline/stage_1_notes_{id}.md (Verdict, Agent Review, Key Decisions)
Bash: node $PIPELINE checkpoint --story {id} --stage 1 --verdict {verdict} --readiness {score} --agents-info "{agents}" --last-action "Validated"
# --- COMPACTION RECOVERY (replaces old COMPACTION GUARD) ---
# If context compacted and vars lost: Bash: node $PIPELINE status --story {id}
# Extract resume_action from JSON -> continue from there. No manual JSON reads needed.
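As an illustration, the compaction-recovery dispatch might parse the status JSON like this. The field paths (`active`, `state.story_id`, `state.stage`, `state.resume_action`) follow the recovery pseudocode earlier in this document, but references/checkpoint_format.md is authoritative.

```javascript
// Sketch: given `node $PIPELINE status` output, decide where to resume.
// Returns null when there is no active run to recover.
function resumeTarget(statusJson) {
  const status = JSON.parse(statusJson);
  if (!status.active) return null; // nothing to resume
  return {
    storyId: status.state.story_id,
    stage: status.state.stage,
    action: status.state.resume_action,
  };
}
```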
# --- STAGE 2+3 LOOP (rework cycle, managed by CLI guards) ---
WHILE true:
# STAGE 2: Execution
IF target_stage <= 2 OR (status shows rework cycle):
Bash: node $PIPELINE advance --story {id} --to STAGE_2
IF advance fails: Bash: node $PIPELINE pause --story {id} --reason "{error}"; ESCALATE; BREAK
Skill(skill: "ln-400-story-executor", args: "{id}")
Re-read kanban -> ASSERT Story status = To Review AND all tasks = Done
IF ASSERT fails: Bash: node $PIPELINE pause --story {id} --reason "Stage 2 incomplete"; ESCALATE; BREAK
git_stats = parse `git diff --stat origin/master..HEAD`
Write stage notes: .hex-skills/pipeline/stage_2_notes_{id}.md (Key Decisions, Git commits)
Bash: node $PIPELINE checkpoint --story {id} --stage 2 --tasks-completed '{JSON done}' --git-stats '{JSON stats}' --last-action "Implementation complete"
# STAGE 3: Quality Gate (IMPOSSIBLE TO SKIP — next line after Stage 2)
Bash: node $PIPELINE advance --story {id} --to STAGE_3
Skill(skill: "ln-500-story-quality-gate", args: "{id}")
Re-read kanban -> check Story status
Extract quality verdict, score from ln-500 output
Extract agents_info from .hex-skills/agent-review/review_history.md or ln-500 output
Write stage notes: .hex-skills/pipeline/stage_3_notes_{id}.md (Verdict, Score, Agent Review, Branch)
Bash: node $PIPELINE checkpoint --story {id} --stage 3 --verdict {verdict} --quality-score {score} --agents-info "{agents}" --last-action "Quality gate: {verdict}"
IF Story status = Done:
Bash: node $PIPELINE advance --story {id} --to DONE
BREAK
IF Story status = To Rework:
Bash: node $PIPELINE advance --story {id} --to STAGE_2 # guard auto-increments quality_cycles
IF advance fails (quality_cycles >= 2):
Bash: node $PIPELINE pause --story {id} --reason "Quality gate failed 2 times"
ESCALATE: "Quality gate failed after max cycles. Manual review needed."
BREAK
target_stage = 2 # loop back to Stage 2
CONTINUE
Bash: node $PIPELINE pause --story {id} --reason "Unexpected Stage 3 outcome"
ESCALATE: "Story ended Stage 3 in unexpected status. Manual review needed."
BREAK
### Stop Conditions (Quality Cycle)
| Condition | Action |
|-----------|--------|
| All tasks Done + Story = Done | STOP — Story completed successfully |
| `quality_cycles >= 2` | STOP — ESCALATE: "Quality gate failed after max cycles. Manual review needed." |
| Validation retry fails (NO-GO after retry) | STOP — ESCALATE: ask user for direction |
| Stage 2 precondition fails | STOP — ESCALATE: "Stage 2 incomplete, manual intervention needed" |
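The `quality_cycles` guard behind the stop conditions above can be sketched conceptually as below. The real logic lives in the CLI's guards.mjs; the function and field names here are assumptions, and only the counting rule (original + 1 rework, rejected at the 2nd FAIL) comes from the spec.

```javascript
// Max quality FAILs per Story: original + 1 rework, per the spec above.
const MAX_QUALITY_CYCLES = 2;

// Sketch of the STAGE_2 re-entry guard: each rework advance increments
// quality_cycles and is rejected once the limit is reached.
function advanceToRework(state) {
  const cycles = (state.quality_cycles ?? 0) + 1;
  if (cycles >= MAX_QUALITY_CYCLES) {
    return { ok: false, error: "Quality gate failed after max cycles. Manual review needed." };
  }
  return { ok: true, state: { ...state, quality_cycles: cycles, stage: "STAGE_2" } };
}
```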
### Phase 5: Cleanup & Report
pre_cleanup_status = Bash: node $PIPELINE status --story {id}
IF pre_cleanup_status.state.stage != "DONE": Bash: node $PIPELINE advance --story {id} --to DONE
status = Bash: node $PIPELINE status --story {id}
final_state = status.state.stage OR "DONE"
verification = {
  story_selected: status.state.story_id == id
  story_processed: final_state IN ("DONE", "PAUSED")
}
IF ANY verification == false: WARN user with details
stage_notes = {}
FOR N IN 0..3:
  IF .hex-skills/pipeline/stage_{N}_notes_{id}.md exists: stage_notes[N] = read file content
  ELSE: stage_notes[N] = "(no notes captured)"
branch_name = git branch --show-current
git_stats_final = git diff --stat origin/master..HEAD (if not already captured)
durations = {N: stage_timestamps.stage_{N}_end - stage_timestamps.stage_{N}_start FOR N IN 0..3 IF both timestamps exist}
Write docs/tasks/reports/pipeline-{date}.md:
Story: {id} -- {title}
Branch: {branch_name}
Final State: {final_state}
Duration: {now() - pipeline_start_time}
| Tasks | Plan Score | Duration |
|---|---|---|
| {N} created | {score}/4 | {durations[0]} |
{stage_notes[0]}
| Verdict | Readiness | Agent Review | Duration |
|---|---|---|---|
| {verdict} | {score}/10 | {agents_info} | {durations[1]} |
{stage_notes[1]}
| Status | Files | Lines | Duration |
|---|---|---|---|
| {result} | {files_changed} | +{added}/-{deleted} | {durations[2]} |
{stage_notes[2]}
| Verdict | Score | Agent Review | Rework | Duration |
|---|---|---|---|---|
| {verdict} | {score}/100 | {agents_info} | {quality_cycles} | {durations[3]} |
{stage_notes[3]}
| Wall-clock | Rework cycles | Validation retries |
|---|---|---|
| {total_duration} | {quality_cycles} | {validation_retries} |
Pipeline Complete:
| Story | Branch | Planning | Validation | Implementation | Quality Gate | State |
|---|---|---|---|---|---|---|
| {id} | {branch} | {stage0} | {stage1} | {stage2} | {stage3} | {final_state} |
Report saved: docs/tasks/reports/pipeline-{date}.md
cd {project_root}
IF final_state == "PAUSED" AND worktree_dir exists AND worktree_dir != project_root:
  git -C {worktree_dir} add -A
  git -C {worktree_dir} commit -m "WIP: {id} pipeline paused" --allow-empty
  git -C {worktree_dir} push -u origin {branch}
  git worktree remove {worktree_dir} --force
  Display: "Partial work saved to branch {branch} (remote). Worktree cleaned."
IF final_state == "DONE" AND worktree_dir exists AND worktree_dir != project_root:
git worktree remove {worktree_dir} --force
IF sleep_prevention_pid: kill $sleep_prevention_pid 2>/dev/null || true
Delete .hex-skills/pipeline/ directory
## Kanban as Single Source of Truth
- **Re-read board** after each stage completion for fresh state. Never cache
- Coordinators (ln-300/310/400/500) update Linear/kanban via their own logic. Lead re-reads and ASSERTs expected state transitions
- **Update algorithm:** Follow `shared/references/kanban_update_algorithm.md` for Epic grouping and indentation
## Error Handling
| Situation | Detection | Action |
|---|---|---|
| ln-300 task creation fails | Skill returns error | Escalate to user: "Cannot create tasks for Story {id}" |
| ln-310 NO-GO (Score <5) | Re-read kanban, status != Todo | Retry once. If still NO-GO -> ask user |
| Task in To Rework 3+ times | ln-400 reports rework loop | Escalate: "Task X reworked 3 times, need input" |
| ln-500 FAIL | Re-read kanban, status = To Rework | Fix tasks auto-created by ln-500. Stage 2 re-entry. Max 2 quality cycles |
| Skill call error | Exception from Skill() | node $PIPELINE status -> re-invoke same Skill (kanban handles task-level resume) |
| Context compression | PostCompact hook or manual detection | node $PIPELINE status -> extract resume_action -> continue |
## Worker Invocation (MANDATORY)
| Stage | Skill | Invocation |
|---|---|---|
| 0 | ln-300-task-coordinator | Skill(skill: "ln-300-task-coordinator", args: "{id}") |
| 1 | ln-310-multi-agent-validator | Skill(skill: "ln-310-multi-agent-validator", args: "{id}") |
| 2 | ln-400-story-executor | Skill(skill: "ln-400-story-executor", args: "{id}") |
| 3 | ln-500-story-quality-gate | Skill(skill: "ln-500-story-quality-gate", args: "{id}") |
TodoWrite format (mandatory):
{content: "Stage N: {name} (ln-NNN)", status: "pending", activeForm: "{verb}ing"}
## Critical Rules
1. **Single Story processing.** User selects which Story to process
2. **Coordinators via Skill.** Lead invokes ln-300/ln-310/ln-400/ln-500 via Skill tool. Each coordinator manages its own internal worker dispatch (Agent/Skill)
3. **Skills as-is.** Never modify or bypass existing skill logic
4. **Kanban verification.** After EVERY Skill call, re-read kanban and ASSERT expected state. Lead never caches kanban state
5. **Quality cycle limit.** Max 2 quality FAILs per Story (original + 1 rework). After 2nd FAIL, escalate to user
6. **Worktree lifecycle.** ln-1000 creates worktree in Phase 3.4. Branch finalization (commit, push) by ln-500. Worktree cleanup by ln-1000 in Phase 5 (lead is in worktree, so ln-500 skips cleanup)
7. **Stage notes.** Lead writes `.hex-skills/pipeline/stage_N_notes_{id}.md` after each stage for Pipeline Report
8. **Checkpoints.** CLI scripts write `.hex-skills/pipeline/checkpoint-{id}.json` via `node $PIPELINE checkpoint` after each stage
## Known Issues
| Symptom | Likely Cause | Self-Recovery |
|---|---|---|
| Lead outputs generic text after long run | Context compression destroyed state vars | `node $PIPELINE status` -> extract resume_action -> continue from there |
| ln-400 stuck on same task | Task in rework loop | ln-400 handles internally; escalates after 3 reworks |
## Anti-Patterns
- Skipping quality gate after execution (Stage 3 is the next line after Stage 2 -- impossible to skip)
- Caching kanban state instead of re-reading after each Skill call
- Running mypy/ruff/pytest directly instead of letting coordinators handle it
When invoked in plan mode, display available Stories and ask the user which Story to plan for:
skill_repo_path -- absolute path to the skills repo root
Plan output format:
BEFORE EXECUTING -- MANDATORY READ: Load {skill_repo_path}/ln-1000-pipeline-orchestrator/SKILL.md (full file). After reading SKILL.md, start from Phase 3 (Pipeline Setup) using the context below.
Story: {ID}: {Title}
Current Status: {status}
Target Stage: {N} ({skill_name})
Storage Mode: {file|linear}
Project Brief: {name} ({tech})
Business Answers: {answers from Phase 2, or "none"}
Skill Repo Path: {skill_repo_path}
(state.story_id set) (state.stage IN ("DONE", "PAUSED")) (docs/tasks/reports/)
MANDATORY READ: Load shared/references/meta_analysis_protocol.md and references/phases/phase6_meta_analysis.md
Skill type: execution-orchestrator. Runs after Phase 5. Pipeline-specific implementation (recovery map, trend tracking, assumption audit, report format) lives in phase6_meta_analysis.md.
references/phases/phase4_flow.md (ASSERT guards, stage notes, context recovery, error handling)
references/phases/phase6_meta_analysis.md (Recovery map, trend tracking, report format)
shared/references/git_worktree_fallback.md
shared/references/research_tool_fallback.md
references/pipeline_states.md
references/checkpoint_format.md
references/kanban_parser.md
shared/references/kanban_update_algorithm.md
references/settings_template.json
references/hooks/prevent-sleep.ps1
shared/references/tools_config_guide.md
shared/references/storage_mode_detection.md
shared/references/auto_discovery_pattern.md
../ln-300-task-coordinator/SKILL.md
../ln-310-multi-agent-validator/SKILL.md
../ln-400-story-executor/SKILL.md
../ln-500-story-quality-gate/SKILL.md
Version: 3.0.0 Last Updated: 2026-03-19
Paths: File paths (
shared/,references/,../ln-*) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root. Ifshared/is missing, fetch files via WebFetch fromhttps://raw.githubusercontent.com/levnikolaevich/claude-code-skills/master/skills/{path}.
Type: L1 Orchestrator Category: 1000 Pipeline
Drives a selected Story through the full pipeline (task planning -> validation -> execution -> quality gate) by invoking coordinators as Skill() calls in a single context.
L0: ln-1000-pipeline-orchestrator (sequential Skill calls, single context)
+-- Skill("ln-300") — task decomposition (internally manages its own workers)
+-- Skill("ln-310") — validation (internally launches Codex/Gemini agents)
+-- Skill("ln-400") — execution (internally dispatches Agent(ln-401/403/404), Skill(ln-402))
+-- Skill("ln-500") — quality gate (internally runs ln-510/ln-520, verdict, finalization)
Key principle: ln-1000 invokes coordinators via Skill tool. Each coordinator manages its own internal worker dispatch. ln-1000 does NOT modify existing skills — it calls them exactly as a human operator would.
MANDATORY READ: Load shared/references/tools_config_guide.md and shared/references/storage_mode_detection.md
Extract: task_provider = Task Management -> Provider (linear | file).
MANDATORY READ: Load references/pipeline_states.md for transition rules and guards.
Backlog --> Stage 0 (ln-300) --> Backlog --> Stage 1 (ln-310) --> Todo
(no tasks) create tasks (tasks exist) validate |
| NO-GO |
v v
[retry/ask] Stage 2 (ln-400)
|
v
To Review
|
v
Stage 3 (ln-500)
| |
PASS FAIL
| v
Done To Rework -> Stage 2
(branch pushed) (max 2 cycles)
| Stage | Skill | Input Status | Output Status |
|---|---|---|---|
| 0 | ln-300-task-coordinator | Backlog (no tasks) | Backlog (tasks created) |
| 1 | ln-310-multi-agent-validator | Backlog (tasks exist) | Todo |
| 2 | ln-400-story-executor | Todo / To Rework | To Review |
| 3 | ln-500-story-quality-gate | To Review | Done / To Rework |
PIPELINE="{skill_repo}/ln-1000-pipeline-orchestrator/scripts/cli.mjs"
recovery = Bash: node $PIPELINE status
IF recovery.active == true:
# Previous run interrupted — resume from CLI state
1. Extract: story_id, stage, resume_action from recovery JSON
2. Re-read kanban board -> verify story still exists
3. IF recovery.state.worktree_dir exists: cd {recovery.state.worktree_dir}
4. Jump to Phase 4, starting from resume_action
IF recovery.active == false:
# Fresh start — proceed to Phase 1
MANDATORY READ: Load references/kanban_parser.md for parsing patterns.
Auto-discover docs/tasks/kanban_board.md (or Linear API via storage mode operations)
Extract project brief from target project's CLAUDE.md (NOT skills repo):
project_brief = {
name: <from H1 or first line>,
tech: <from Development Commands / tech references>,
type: <inferred: "CLI", "API", "web app", "library">,
key_rules: <2-3 critical rules>
}
IF not found: project_brief = { name: basename(project_root), tech: "unknown" }
Parse all status sections: Backlog, Todo, In Progress, To Review, To Rework
Extract Story list with: ID, title, status, Epic name, task presence
Filter: skip Stories in Done, Postponed, Canceled
Detect task presence per Story:
_(tasks not created yet)_ -> no tasks -> Stage 0Determine target stage per Story (see references/pipeline_states.md Stage-to-Status Mapping)
project_brief.tech ecosystem)project_brief.key_rulesSkip Phase 2 if no business questions found. Proceed directly to Phase 3.
IF storage_mode == "linear":
statuses = list_issue_statuses(teamId=team_id)
status_cache = {status.name: status.id FOR status IN statuses}
REQUIRED = ["Backlog", "Todo", "In Progress", "To Review", "To Rework", "Done"]
missing = [s for s in REQUIRED if s not in status_cache]
IF missing: ABORT "Missing Linear statuses: {missing}. Configure workflow."
Verify .claude/settings.local.json in target project:
defaultMode = "bypassPermissions" (required for Agent workers spawned by coordinators)MANDATORY READ: Load shared/references/git_worktree_fallback.md
branch_check = git branch --show-current
IF branch_check matches feature/* / optimize/* / upgrade/* / modernize/*:
worktree_dir = CWD
project_root = CWD
branch = branch_check
ELSE:
story_slug = slugify(selected_story.title)
branch = "feature/{selected_story.id}-{story_slug}"
worktree_dir = ".hex-skills/worktrees/story-{selected_story.id}"
project_root = CWD
changes = git diff HEAD
IF changes not empty:
git diff HEAD > .hex-skills/pipeline/carry-changes.patch
git fetch origin
git worktree add -b {branch} {worktree_dir} origin/master
IF .hex-skills/pipeline/carry-changes.patch exists:
git -C {worktree_dir} apply .hex-skills/pipeline/carry-changes.patch && rm .hex-skills/pipeline/carry-changes.patch
IF apply fails: WARN user "Patch conflicts -- continuing without uncommitted changes"
cd {worktree_dir} # All subsequent Skill calls inherit this CWD
Coordinators self-detect feature/* on startup -> skip their own worktree creation (ln-400 Phase 1 step 5).
Bash: node $PIPELINE start \
--story {selected_story.id} \
--title "{selected_story.title}" \
--storage {storage_mode} \
--project-brief '{JSON.stringify(project_brief)}' \
--story-briefs '{JSON.stringify(story_briefs)}' \
--business-answers '{JSON.stringify(business_answers)}' \
--status-cache '{JSON.stringify(status_cache)}' \
--skill-repo-path "{skill_repo}" \
--worktree-dir "{worktree_dir}" \
--branch-name "{branch}"
IF result.recovery == true:
# Active run found — resume instead of fresh start
Jump to Phase 4 using result.state
IF platform == "win32":
Bash: cp {skill_repo}/ln-1000-pipeline-orchestrator/references/hooks/prevent-sleep.ps1 .hex-skills/pipeline/prevent-sleep.ps1
Bash: powershell -ExecutionPolicy Bypass -WindowStyle Hidden -File .hex-skills/pipeline/prevent-sleep.ps1 &
sleep_prevention_pid = $!
MANDATORY READ: Load references/phases/phase4_flow.md for ASSERT guards, stage notes, context recovery, and error handling. MANDATORY READ: Load references/checkpoint_format.md for checkpoint schema.
# --- INITIALIZATION ---
id = selected_story.id
target_stage = determine_stage(selected_story) # pipeline_states.md / guards.mjs
# --- PROGRESS TRACKER (survives compaction) ---
TodoWrite([
{content: "Stage 0: Task Decomposition (ln-300)", status: "pending", activeForm: "Decomposing tasks"},
{content: "Stage 1: Validation (ln-310)", status: "pending", activeForm: "Validating story"},
{content: "Stage 2: Execution (ln-400)", status: "pending", activeForm: "Executing tasks"},
{content: "Stage 3: Quality Gate (ln-500)", status: "pending", activeForm: "Running quality gate"},
{content: "Pipeline Report + Cleanup", status: "pending", activeForm: "Generating report"}
])
# --- STAGE 0: Task Decomposition ---
IF target_stage <= 0:
Bash: node $PIPELINE advance --story {id} --to STAGE_0
Skill(skill: "ln-300-task-coordinator", args: "{id}")
Re-read kanban -> ASSERT tasks exist under Story, count IN 1..8
IF ASSERT fails: Bash: node $PIPELINE pause --story {id} --reason "Task creation failed"; ESCALATE
Write stage notes: .hex-skills/pipeline/stage_0_notes_{id}.md (Key Decisions, Artifacts)
Bash: node $PIPELINE checkpoint --story {id} --stage 0 --plan-score {score} --tasks-remaining '{JSON tasks}' --last-action "Tasks created"
# --- STAGE 1: Validation ---
IF target_stage <= 1:
Bash: node $PIPELINE advance --story {id} --to STAGE_1
IF advance fails (guard rejection): handle per error.recovery
Skill(skill: "ln-310-multi-agent-validator", args: "{id}")
Re-read kanban -> ASSERT Story status = Todo
Extract readiness_score from ln-310 output
IF NO-GO:
Bash: node $PIPELINE advance --story {id} --to STAGE_1 # retry (guard auto-increments validation_retries)
IF advance fails: Bash: node $PIPELINE pause --story {id} --reason "Validation retry exhausted"; ESCALATE
Skill(skill: "ln-310-multi-agent-validator", args: "{id}") # retry
Re-read kanban -> ASSERT Story status = Todo
IF still NOT Todo: Bash: node $PIPELINE pause --story {id} --reason "Validation failed"; ESCALATE
Extract agents_info from .hex-skills/agent-review/review_history.md or ln-310 output
Write stage notes: .hex-skills/pipeline/stage_1_notes_{id}.md (Verdict, Agent Review, Key Decisions)
Bash: node $PIPELINE checkpoint --story {id} --stage 1 --verdict {verdict} --readiness {score} --agents-info "{agents}" --last-action "Validated"
# --- COMPACTION RECOVERY (replaces old COMPACTION GUARD) ---
# If context compacted and vars lost: Bash: node $PIPELINE status --story {id}
# Extract resume_action from JSON -> continue from there. No manual JSON reads needed.
# --- STAGE 2+3 LOOP (rework cycle, managed by CLI guards) ---
WHILE true:
# STAGE 2: Execution
IF target_stage <= 2 OR (status shows rework cycle):
Bash: node $PIPELINE advance --story {id} --to STAGE_2
IF advance fails: Bash: node $PIPELINE pause --story {id} --reason "{error}"; ESCALATE; BREAK
Skill(skill: "ln-400-story-executor", args: "{id}")
Re-read kanban -> ASSERT Story status = To Review AND all tasks = Done
IF ASSERT fails: Bash: node $PIPELINE pause --story {id} --reason "Stage 2 incomplete"; ESCALATE; BREAK
git_stats = parse `git diff --stat origin/master..HEAD`
Write stage notes: .hex-skills/pipeline/stage_2_notes_{id}.md (Key Decisions, Git commits)
Bash: node $PIPELINE checkpoint --story {id} --stage 2 --tasks-completed '{JSON done}' --git-stats '{JSON stats}' --last-action "Implementation complete"
# STAGE 3: Quality Gate (IMPOSSIBLE TO SKIP — next line after Stage 2)
Bash: node $PIPELINE advance --story {id} --to STAGE_3
Skill(skill: "ln-500-story-quality-gate", args: "{id}")
Re-read kanban -> check Story status
Extract quality verdict, score from ln-500 output
Extract agents_info from .hex-skills/agent-review/review_history.md or ln-500 output
Write stage notes: .hex-skills/pipeline/stage_3_notes_{id}.md (Verdict, Score, Agent Review, Branch)
Bash: node $PIPELINE checkpoint --story {id} --stage 3 --verdict {verdict} --quality-score {score} --agents-info "{agents}" --last-action "Quality gate: {verdict}"
IF Story status = Done:
Bash: node $PIPELINE advance --story {id} --to DONE
BREAK
IF Story status = To Rework:
Bash: node $PIPELINE advance --story {id} --to STAGE_2 # guard auto-increments quality_cycles
IF advance fails (quality_cycles >= 2):
Bash: node $PIPELINE pause --story {id} --reason "Quality gate failed 2 times"
ESCALATE: "Quality gate failed after max cycles. Manual review needed."
BREAK
target_stage = 2 # loop back to Stage 2
CONTINUE
Bash: node $PIPELINE pause --story {id} --reason "Unexpected Stage 3 outcome"
ESCALATE: "Story ended Stage 3 in unexpected status. Manual review needed."
BREAK
### Stop Conditions (Quality Cycle)
| Condition | Action |
|-----------|--------|
| All tasks Done + Story = Done | STOP — Story completed successfully |
| `quality_cycles >= 2` | STOP — ESCALATE: "Quality gate failed after max cycles. Manual review needed." |
| Validation retry fails (NO-GO after retry) | STOP — ESCALATE: ask user for direction |
| Stage 2 precondition fails | STOP — ESCALATE: "Stage 2 incomplete, manual intervention needed" |
### Phase 5: Cleanup & Report
pre_cleanup_status = Bash: node $PIPELINE status --story {id} IF pre_cleanup_status.state.stage != "DONE": Bash: node $PIPELINE advance --story {id} --to DONE
status = Bash: node $PIPELINE status --story {id} final_state = status.state.stage OR "DONE" verification = { story_selected: status.state.story_id == id story_processed: final_state IN ("DONE", "PAUSED") } IF ANY verification == false: WARN user with details
stage_notes = {} FOR N IN 0..3: IF .hex-skills/pipeline/stage_{N}notes{id}.md exists: stage_notes[N] = read file content ELSE: stage_notes[N] = "(no notes captured)"
branch_name = git branch --show-current git_stats_final = git diff --stat origin/master..HEAD (if not already captured)
durations = {N: stage_timestamps.stage_{N}end - stage_timestamps.stage{N}_start FOR N IN 0..3 IF both timestamps exist}
Write docs/tasks/reports/pipeline-{date}.md:
Story: {id} -- {title} Branch: {branch_name} Final State: {final_state} Duration: {now() - pipeline_start_time}
| Tasks | Plan Score | Duration |
|---|---|---|
| {N} created | {score}/4 | {durations[0]} |
{stage_notes[0]}
| Verdict | Readiness | Agent Review | Duration |
|---|---|---|---|
| {verdict} | {score}/10 | {agents_info} | {durations[1]} |
{stage_notes[1]}
| Status | Files | Lines | Duration |
|---|---|---|---|
| {result} | {files_changed} | +{added}/-{deleted} | {durations[2]} |
{stage_notes[2]}
| Verdict | Score | Agent Review | Rework | Duration |
|---|---|---|---|---|
| {verdict} | {score}/100 | {agents_info} | {quality_cycles} | {durations[3]} |
{stage_notes[3]}
| Wall-clock | Rework cycles | Validation retries |
|---|---|---|
| {total_duration} | {quality_cycles} | {validation_retries} |
Pipeline Complete:
| Story | Branch | Planning | Validation | Implementation | Quality Gate | State |
|---|---|---|---|---|---|---|
| {id} | {branch} | {stage0} | {stage1} | {stage2} | {stage3} | {final_state} |
Report saved: docs/tasks/reports/pipeline-{date}.md
cd {project_root} IF final_state == "PAUSED" AND worktree_dir exists AND worktree_dir != project_root: git -C {worktree_dir} add -A git -C {worktree_dir} commit -m "WIP: {id} pipeline paused" --allow-empty git -C {worktree_dir} push -u origin {branch} git worktree remove {worktree_dir} --force Display: "Partial work saved to branch {branch} (remote). Worktree cleaned." IF final_state == "DONE" AND worktree_dir exists AND worktree_dir != project_root:
git worktree remove {worktree_dir} --force
IF sleep_prevention_pid: kill $sleep_prevention_pid 2>/dev/null || true
Delete .hex-skills/pipeline/ directory
## Kanban as Single Source of Truth
- **Re-read board** after each stage completion for fresh state. Never cache
- Coordinators (ln-300/310/400/500) update Linear/kanban via their own logic. Lead re-reads and ASSERTs expected state transitions
- **Update algorithm:** Follow `shared/references/kanban_update_algorithm.md` for Epic grouping and indentation
## Error Handling
| Situation | Detection | Action |
|-----------|----------|--------|
| ln-300 task creation fails | Skill returns error | Escalate to user: "Cannot create tasks for Story {id}" |
| ln-310 NO-GO (Score <5) | Re-read kanban, status != Todo | Retry once. If still NO-GO -> ask user |
| Task in To Rework 3+ times | ln-400 reports rework loop | Escalate: "Task X reworked 3 times, need input" |
| ln-500 FAIL | Re-read kanban, status = To Rework | Fix tasks auto-created by ln-500. Stage 2 re-entry. Max 2 quality cycles |
| Skill call error | Exception from Skill() | `node $PIPELINE status` -> re-invoke same Skill (kanban handles task-level resume) |
| Context compression | PostCompact hook or manual detection | `node $PIPELINE status` -> extract resume_action -> continue |
## Worker Invocation (MANDATORY)
| Stage | Skill | Invocation |
|-------|-------|------------|
| 0 | ln-300-task-coordinator | `Skill(skill: "ln-300-task-coordinator", args: "{id}")` |
| 1 | ln-310-multi-agent-validator | `Skill(skill: "ln-310-multi-agent-validator", args: "{id}")` |
| 2 | ln-400-story-executor | `Skill(skill: "ln-400-story-executor", args: "{id}")` |
| 3 | ln-500-story-quality-gate | `Skill(skill: "ln-500-story-quality-gate", args: "{id}")` |
TodoWrite format (mandatory):
{content: "Stage N: {name} (ln-NNN)", status: "pending", activeForm: "{verb}ing"}
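The invocation table plus the kanban-verification rule amount to a sequential loop with a post-stage ASSERT. A minimal sketch, assuming hypothetical `Skill` and `readKanbanStatus` stand-ins for the real tool calls (the expected states mirror the stage table; Stage 3 may also land in To Rework, which the loop below omits for brevity):

```javascript
// Illustrative only: Skill() and readKanbanStatus() are stand-ins for
// the actual tool invocations; expected states mirror the stage table.
const STAGES = [
  { n: 0, skill: "ln-300-task-coordinator",      expect: "Backlog"   },
  { n: 1, skill: "ln-310-multi-agent-validator", expect: "Todo"      },
  { n: 2, skill: "ln-400-story-executor",        expect: "To Review" },
  { n: 3, skill: "ln-500-story-quality-gate",    expect: "Done"      },
];

function runPipeline(storyId, Skill, readKanbanStatus) {
  for (const stage of STAGES) {
    Skill({ skill: stage.skill, args: storyId });
    // Kanban is the single source of truth: re-read after every call, never cache.
    const status = readKanbanStatus(storyId);
    if (status !== stage.expect) {
      throw new Error(
        `ASSERT failed after Stage ${stage.n}: expected "${stage.expect}", got "${status}"`
      );
    }
  }
}
```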
## Critical Rules
1. **Single Story processing.** User selects which Story to process
2. **Coordinators via Skill.** Lead invokes ln-300/ln-310/ln-400/ln-500 via Skill tool. Each coordinator manages its own internal worker dispatch (Agent/Skill)
3. **Skills as-is.** Never modify or bypass existing skill logic
4. **Kanban verification.** After EVERY Skill call, re-read kanban and ASSERT expected state. Lead never caches kanban state
5. **Quality cycle limit.** Max 2 quality FAILs per Story (original + 1 rework). After 2nd FAIL, escalate to user
6. **Worktree lifecycle.** ln-1000 creates worktree in Phase 3.4. Branch finalization (commit, push) by ln-500. Worktree cleanup by ln-1000 in Phase 5 (lead is in worktree, so ln-500 skips cleanup)
7. **Stage notes.** Lead writes `.hex-skills/pipeline/stage_N_notes_{id}.md` after each stage for Pipeline Report
8. **Checkpoints.** CLI scripts write `.hex-skills/pipeline/checkpoint-{id}.json` via `node $PIPELINE checkpoint` after each stage
## Known Issues
| Symptom | Likely Cause | Self-Recovery |
|---------|-------------|---------------|
| Lead outputs generic text after long run | Context compression destroyed state vars | `node $PIPELINE status` -> extract resume_action -> continue from there |
| ln-400 stuck on same task | Task in rework loop | ln-400 handles internally; escalates after 3 reworks |
## Anti-Patterns
- Skipping quality gate after execution (Stage 3 is the next line after Stage 2 -- impossible to skip)
- Caching kanban state instead of re-reading after each Skill call
- Running mypy/ruff/pytest directly instead of letting coordinators handle it
- Processing multiple stories without user selection
- Creating worktrees outside Phase 3.4 (coordinators self-detect feature/*)
- Modifying coordinator internal dispatch (ln-400's Agent/Skill pattern is correct as-is)
## Plan Mode Support
When invoked in Plan Mode, show available Stories and ask user which one to plan for:
1. Parse kanban board (Phase 1 steps 1-7)
2. Show available Stories table
3. AskUserQuestion: "Which story to plan for? Enter # or Story ID."
4. Execute Phase 2 (pre-flight questions) if business ambiguities found
5. Resolve `skill_repo_path` -- absolute path to skills repo root
6. Show execution plan for selected Story
7. Write plan to plan file (using format below), call ExitPlanMode
**Plan Output Format:**
BEFORE EXECUTING -- MANDATORY READ: Load `{skill_repo_path}/ln-1000-pipeline-orchestrator/SKILL.md` (full file). After reading SKILL.md, start from Phase 3 (Pipeline Setup) using the context below.
Story: {ID}: {Title}
Current Status: {status}
Target Stage: {N} ({skill_name})
Storage Mode: {file|linear}
Project Brief: {name} ({tech})
Business Answers: {answers from Phase 2, or "none"}
Skill Repo Path: {skill_repo_path}
## Definition of Done (self-verified in Phase 5)
- [ ] User selected Story (`state.story_id` is set)
- [ ] Business questions resolved (stored OR skip)
- [ ] Story processed to terminal state (`state.stage IN ("DONE", "PAUSED")`)
- [ ] Per-stage ASSERT verifications passed (kanban re-read after each stage)
- [ ] Stage notes written for each completed stage
- [ ] Pipeline report generated (file exists at `docs/tasks/reports/`)
- [ ] Pipeline summary shown to user
- [ ] Worktree cleaned up (Phase 5 step 6)
- [ ] Meta-Analysis run (Phase 6)
## Phase 6: Meta-Analysis
**MANDATORY READ:** Load `shared/references/meta_analysis_protocol.md` and `references/phases/phase6_meta_analysis.md`
Skill type: `execution-orchestrator`. Runs after Phase 5. Pipeline-specific implementation (recovery map, trend tracking, assumption audit, report format) in `phase6_meta_analysis.md`.
## Reference Files
### Phase 4-6 Procedures (Progressive Disclosure)
- **Pipeline flow:** `references/phases/phase4_flow.md` (ASSERT guards, stage notes, context recovery, error handling)
- **Meta-analysis:** `references/phases/phase6_meta_analysis.md` (Recovery map, trend tracking, report format)
### Core Infrastructure
- **MANDATORY READ:** `shared/references/git_worktree_fallback.md`
- **MANDATORY READ:** `shared/references/research_tool_fallback.md`
- **Pipeline states:** `references/pipeline_states.md`
- **Checkpoint format:** `references/checkpoint_format.md`
- **Kanban parsing:** `references/kanban_parser.md`
- **Kanban update algorithm:** `shared/references/kanban_update_algorithm.md`
- **Settings template:** `references/settings_template.json`
- **Sleep prevention:** `references/hooks/prevent-sleep.ps1`
- **Tools config:** `shared/references/tools_config_guide.md`
- **Storage mode operations:** `shared/references/storage_mode_detection.md`
- **Auto-discovery patterns:** `shared/references/auto_discovery_pattern.md`
### Delegated Skills
- `../ln-300-task-coordinator/SKILL.md`
- `../ln-310-multi-agent-validator/SKILL.md`
- `../ln-400-story-executor/SKILL.md`
- `../ln-500-story-quality-gate/SKILL.md`
---
**Version:** 3.0.0
**Last Updated:** 2026-03-19
Show available Stories and ask user to pick ONE:
Project: {project_brief.name} ({project_brief.tech})
Available Stories:
| # | Story | Status | Stage | Skill | Epic |
|---|-------|--------|-------|-------|------|
| 1 | PROJ-42: Auth endpoint | To Review | 3 | ln-500 | Epic: Auth |
| 2 | PROJ-55: CRUD users | Backlog (no tasks) | 0 | ln-300 | Epic: Users |
| 3 | PROJ-60: Dashboard | Todo | 2 | ln-400 | Epic: UI |
AskUserQuestion: "Which story to process? Enter # or Story ID."
Store selected story. Extract story brief for selected story only:
description = get_issue(selected_story.id).description
story_briefs[id] = parse <!-- ORCHESTRATOR_BRIEF_START/END --> markers
IF no markers: story_briefs[id] = { tech: project_brief.tech, keyFiles: "unknown" }
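The marker extraction above can be sketched as follows. The helper name and return shapes are illustrative, not part of the spec; it mirrors the pseudocode: return the brief text between the HTML-comment markers, or fall back to a minimal object built from the project brief:

```javascript
// Extract the orchestrator brief between HTML-comment markers in a
// story description; fall back to the project brief when markers are absent.
// Sketch only: the helper name and shapes are assumptions.
function parseStoryBrief(description, projectBrief) {
  const match =
    /<!--\s*ORCHESTRATOR_BRIEF_START\s*-->([\s\S]*?)<!--\s*ORCHESTRATOR_BRIEF_END\s*-->/.exec(
      description ?? ""
    );
  if (match) return match[1].trim();
  // No markers: minimal default so later stages still have tech context.
  return { tech: projectBrief.tech, keyFiles: "unknown" };
}
```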