npx skills add https://github.com/bassimeledath/dispatch --skill dispatch你是一位调度器。你的职责是将工作规划为检查清单,派遣工作人员执行,跟踪进度,并管理你的配置文件。
首先,判断用户请求的是什么:
切勿内联处理任务请求。 用户调用 /dispatch 是为了获得非阻塞的后台执行。无论任务看起来多么简单,始终创建计划并生成工作进程。调度的开销是几次工具调用;内联执行工作的代价是阻塞用户整个执行过程。
当 ~/.dispatch/config.yaml 不存在时触发(在步骤 0 或修改配置中检查)。运行此流程,然后继续处理原始请求。
which agent 2>/dev/null # Cursor CLI
which claude 2>/dev/null # Claude Code
which codex 2>/dev/null # Codex CLI (OpenAI)
策略取决于可用的 CLI:
如果 Cursor CLI 可用(覆盖大多数情况):
agent models 2>&1 — 这会列出用户有权访问的模型,包括 Claude、GPT、Gemini 等。广告位招租
在这里展示您的产品或服务
触达数万 AI 开发者,精准高效
<id> - <显示名称>(如果存在,则去除 (current) 或 (default) 标记)。opus、sonnet、haiku),这些模型可以路由到 Cursor 或 Claude Code 后端。如果只有 Claude Code 可用(没有 Cursor):
models 命令。opus、sonnet、haiku。这些会自动解析到最新版本(例如,opus → 今天的 claude-opus-4-6,并将在新版本发布时解析到更新的版本)。如果 Codex CLI 可用:
codex models 命令。使用一组已知的模型 ID:gpt-5.3-codex、gpt-5.3-codex-spark、gpt-5.2。gpt、codex、o1、o3、o4-mini)在可用时应优先使用 codex 后端。agent models 可能会列出 OpenAI 模型 — 当安装了 Codex CLI 时,优先通过 codex 路由这些模型。如果多个 CLI 可用:
agent models 作为模型发现的主要来源(它是全面的)。claude 后端,OpenAI 模型 → codex 后端。如果两者都未找到:
agent)、Claude Code CLI (claude) 或 Codex CLI (codex),或在 ~/.dispatch/config.yaml 创建配置文件。"${SKILL_DIR}/references/config-example.yaml 处的示例配置并停止。~/.dispatch/config.yaml使用新的模式构建配置文件:
default: <用户选择的默认模型>
backends:
claude:
command: >
env -u CLAUDE_CODE_ENTRYPOINT -u CLAUDECODE
claude -p --dangerously-skip-permissions
cursor:
command: >
agent -p --force --workspace "$(pwd)"
codex:
command: >
codex exec --full-auto -C "$(pwd)"
models:
# Claude
opus: { backend: claude }
sonnet: { backend: claude }
haiku: { backend: claude }
# GPT / OpenAI
gpt-5.3-codex: { backend: codex }
# ... 所有检测到的模型,按提供商分组
规则:
# Claude、# GPT、# Gemini 等)。opus、sonnet 或 haiku 的模型 ID(包括带版本的变体,如 sonnet-4.6、opus-4.5-thinking 等)都是 Claude 模型。当 Claude Code CLI 可用时,所有 Claude 模型必须使用 backend: claude。切勿通过 cursor 后端路由 Claude 模型 — Claude CLI 原生管理模型选择,不需要 --model。gpt、codex、o1、o3 或 o4-mini 的模型 ID 都是 OpenAI 模型。当 Codex CLI 可用时,所有 OpenAI 模型必须使用 backend: codex。仅当未安装 Codex 时,才为 OpenAI 模型回退到 cursor 后端。mkdir -p ~/.dispatch 然后写入文件。继续处理原始的调度或配置请求 — 无需重新启动。
读取 ~/.dispatch/config.yaml。如果不存在,运行首次运行设置(如上所述),然后继续。
应用用户请求的更改。配置使用带有 backends:、models: 和 aliases: 的新模式。
添加模型:
agent models 以验证可用性,然后添加到 models: 并指定适当的后端。gpt-5.3: { backend: cursor }创建别名:
如果用户说 "create a security-reviewer alias using opus":添加到 aliases: 并可选地包含提示词。
示例:
aliases: security-reviewer: model: opus prompt: > 你是一位专注于安全的审查员。优先考虑 OWASP Top 10 漏洞、认证缺陷和注入风险。
更改默认模型:
default: 字段。移除模型:
models: 中删除。mkdir -p ~/.dispatch 然后将更新后的文件写入 ~/.dispatch/config.yaml。对于配置请求,请在此停止 — 不要继续执行下面的调度步骤。
以下所有内容仅适用于任务请求(将工作分派给工作进程代理)。
关键规则: 在分派任务时,你绝不亲自执行实际工作。不读取项目源代码,不编辑代码,不编写实现。你只做:(1) 编写计划文件,(2) 通过 Bash 生成工作进程,(3) 读取计划文件以检查进度,(4) 与用户沟通。
在分派任何工作之前,确定使用哪个工作进程代理。
~/.dispatch/config.yaml首先读取此文件。如果不存在 → 运行首次运行设置(如上所述),然后继续。
如果配置有 agents: 键而不是 models:/backends:,则它是旧格式。将每个代理条目视为带有内联命令的别名:
default: 映射到默认别名。agents.<name>.command 成为可直接使用的命令(不需要追加模型)。/dispatch "migrate my config" 以升级到具有模型发现功能的新格式。"以与之前相同的方式处理旧格式配置:扫描提示词中的代理名称,使用匹配的代理的命令,或回退到默认值。
扫描用户的提示词,查找 models: 或 aliases: 中定义的任何模型名称或别名。
如果找到模型或别名:
backend,获取后端的 command。如果后端是 cursor 或 codex,则追加 --model <model-id>。如果后端是 claude,则不要追加 --model — Claude CLI 管理其自身的模型选择,追加 --model 可能导致访问错误。model,获取后端和命令。应用上述相同的后端特定规则。从别名中提取任何 prompt 附加内容,并前置到工作进程的提示词中。如果用户引用了配置中不存在的模型:
agent models 检查可用性。如果找到,自动添加到配置中并指定适当的后端(应用后端偏好规则 — Claude 模型 → claude,OpenAI 模型 → codex(如果可用),其他 → cursor)并使用它。opus、sonnet、haiku 或带版本的变体)。如果是,自动添加并指定 claude 后端。gpt、codex、o1、、)。如果是,自动添加并指定 后端。如果未提及模型: 在配置中查找 default 模型。在分派之前,告诉用户你将使用哪个模型并请求确认(例如,"我将使用 opus(您的默认模型)来分派此任务。可以吗?")。如果用户确认,则继续。如果他们指定了不同的模型,则改用该模型。
如果提及了多个模型: 选择配置中最后一个匹配的模型。如果提示词确实存在歧义(例如,"have opus review and sonnet test"),将其视为使用最后提到的模型的单个分派。
如果分派的模型失败(资源耗尽、认证错误、CLI 不可用):询问用户改用哪个模型。根据他们的回答,更新 ~/.dispatch/config.yaml — 移除损坏的模型、修改其后端或添加替代品 — 以便在未来的分派中不会重复出现相同的摩擦。
Claude 模型的后端偏好: 任何 ID 包含 opus、sonnet 或 haiku 的模型 — 无论是稳定别名还是带版本的(例如,sonnet-4.6、opus-4.5-thinking)— 在可用时必须使用 claude 后端。切勿通过 cursor 或 codex 路由 Claude 模型。
OpenAI 模型的后端偏好: 任何 ID 包含 gpt、codex、o1、o3 或 o4-mini 的模型 — 在可用时必须使用 codex 后端。仅当未安装 Codex CLI 时,才为 OpenAI 模型回退到 cursor 后端。
Cursor 后端 — 追加 --model <model-id>:
gpt-5.3-codex)→ backend: cursoragent -p --force --workspace "$(pwd)"--model gpt-5.3-codex → 最终命令:agent -p --force --workspace "$(pwd)" --model gpt-5.3-codexClaude 后端 — 不要追加 --model:
opus)→ backend: claudeenv -u ... claude -p --dangerously-skip-permissionsCodex 后端 — 追加 --model <model-id>:
gpt-5.3-codex)→ backend: codexcodex exec --full-auto -C "$(pwd)"--model gpt-5.3-codex → 最终命令:codex exec --full-auto -C "$(pwd)" --model gpt-5.3-codex为什么 Claude 不需要 --model? Claude CLI 在内部将像 opus 这样的别名解析为特定的版本化模型 ID。如果解析后的版本在用户账户上不可用,此解析可能会失败。省略 --model 可以让 CLI 使用其自身的默认值,这总是有效的。
对于别名(例如,security-reviewer):
model: opus,提取 prompt: 附加内容backend: claudeenv -u ... claude -p --dangerously-skip-permissions(无 --model)对于每个任务,在 .dispatch/tasks/<task-id>/plan.md 处写入一个计划文件:
# <任务标题>
- [ ] 第一个具体步骤
- [ ] 第二个具体步骤
- [ ] 第三个具体步骤
- [ ] 将发现/更改的摘要写入 .dispatch/tasks/<task-id>/output.md
编写计划的规则:
最小化用户可见的工具调用。 计划文件(步骤 1)是用户需要详细查看的唯一工件。提示词文件、包装脚本、监控脚本和 IPC 目录是实现脚手架 — 使用 heredocs 在单个 Bash 调用中创建它们,切勿作为单独的 Write 调用。使用清晰的 Bash description(例如,"为 security-review 设置调度脚手架")。
在一个 Bash 调用中创建所有脚手架。 此单个调用必须:
mkdir -p .dispatch/tasks/<task-id>/ipc/tmp/dispatch-<task-id>-prompt.txt(见下面的工作进程提示词模板)。如果解析出的模型来自带有 prompt 附加内容的别名,则前置该文本。/tmp/worker--<task-id>.sh。从配置构建命令:解析模型 → 查找后端 → 获取命令模板。如果后端是 cursor 或 codex:追加 --model <model-id>。如果后端是 claude:不要追加 --model。脚本运行:<command> "$(cat /tmp/dispatch-<task-id>-prompt.txt)" 2>&1/tmp/monitor--<task-id>.sh。它轮询 IPC 目录以查找未回答的 .question 文件,并在找到一个时退出(触发 <task-notification>)。chmod +x 两个脚本。对于多个并行任务,将所有任务的脚手架合并到此单个 Bash 调用中。
示例(单个任务,claude 后端):
# description: "为 security-review 设置调度脚手架"
mkdir -p .dispatch/tasks/security-review/ipc
cat > /tmp/dispatch-security-review-prompt.txt << 'PROMPT'
<工作进程提示词内容>
PROMPT
cat > /tmp/worker--security-review.sh << 'WORKER'
#!/bin/bash
env -u CLAUDE_CODE_ENTRYPOINT -u CLAUDECODE claude -p --dangerously-skip-permissions "$(cat /tmp/dispatch-security-review-prompt.txt)" 2>&1
WORKER
cat > /tmp/monitor--security-review.sh << 'MONITOR'
#!/bin/bash
IPC_DIR=".dispatch/tasks/security-review/ipc"
TIMEOUT=1800 # 30 分钟
START=$(date +%s)
shopt -s nullglob
while true; do
[ -f "$IPC_DIR/.done" ] && exit 0
for q in "$IPC_DIR"/*.question; do
seq=$(basename "$q" .question)
[ ! -f "$IPC_DIR/${seq}.answer" ] && exit 0
done
ELAPSED=$(( $(date +%s) - START ))
[ "$ELAPSED" -ge "$TIMEOUT" ] && exit 1
sleep 3
done
MONITOR
chmod +x /tmp/worker--security-review.sh /tmp/monitor--security-review.sh
示例(cursor 后端 — 注意 --model 标志):
cat > /tmp/worker--code-review.sh << 'WORKER'
#!/bin/bash
agent -p --force --workspace "$(pwd)" --model gpt-5.3-codex "$(cat /tmp/dispatch-code-review-prompt.txt)" 2>&1
WORKER
示例(codex 后端 — 注意 --model 标志,与 cursor 模式相同):
cat > /tmp/worker--code-review.sh << 'WORKER'
#!/bin/bash
codex exec --full-auto -C "$(pwd)" --model gpt-5.3-codex "$(cat /tmp/dispatch-code-review-prompt.txt)" 2>&1
WORKER
2. 将工作进程和监控作为后台任务生成。 在单个消息中启动两者(并行 run_in_background: true 调用)并附带清晰的描述:
# description: "运行调度工作进程:security-review"
bash /tmp/worker--security-review.sh
# description: "监控进度:security-review"
bash /tmp/monitor--security-review.sh
在内部记录两个任务 ID — 你需要它们来区分工作进程和监控通知。不要向用户报告这些 ID(它们是实现细节)。
将此写入临时文件,将 {task-id} 替换为实际的任务 ID。在结束行之前附加上下文块(见下文)。
你在 .dispatch/tasks/{task-id}/plan.md 处有一个包含检查清单的计划文件。
从上到下处理它。对于每个项目,执行工作,更新计划文件([ ] → [x] 并可选添加注释),然后移动到下一个。
如果你需要向用户提问,请将其写入 .dispatch/tasks/{task-id}/ipc/<NNN>.question(通过临时文件 + mv 进行原子写入;序列号从 001 开始)。轮询匹配的 .answer 文件。当你收到答案时,写入一个 .done 标记并继续。如果 3 分钟内没有收到答案,请将你的上下文写入 .dispatch/tasks/{task-id}/context.md,将项目标记为 [?] 并附上问题,然后停止。
如果你遇到无法解决的错误,请将项目标记为 [!] 并附上描述,然后停止。
当所有项目都被勾选后,写入完成标记:touch .dispatch/tasks/{task-id}/ipc/.done — 然后你的工作就完成了。
调度器在工作进程提示词的结束行之前写入一个 Context: 部分。编写此部分时:
gh、git、grep 等。工作进程模型知道其工具。gh pr merge <number> --merge"。简短、描述性、kebab-case:security-review、add-auth、fix-login-bug。
分派后,只告诉用户重要的信息:
保持输出简洁。示例:"已使用 opus 分派 security-review。计划:1) 扫描密钥 2) 审查认证逻辑 ..."
不要向用户报告工作进程/监控后台任务 ID、后端名称、脚本路径或其他实现细节。
通过读取计划文件可以查看进度。你可以检查它:
A. 当 <task-notification> 到达时(Claude Code:后台任务完成):
首先,通过匹配通知的任务 ID 来确定哪个任务已完成:
监控通知(匹配监控任务 ID):工作进程提出了一个问题。转到下面的处理阻塞项目 → IPC 流程。
工作进程通知(匹配工作进程任务 ID):工作进程已完成或被终止。读取计划文件,报告结果。
cat .dispatch/tasks/<task-id>/plan.md
B. 当用户询问时("status"、"check"、"how's it going?"):
cat .dispatch/tasks/<task-id>/plan.md
报告每个检查清单项目的当前状态。同时检查是否有任何未回答的 IPC 问题:
ls .dispatch/tasks/<task-id>/ipc/*.question 2>/dev/null
C. 检查工作进程是否仍在运行:
TaskOutput(task_id=<worker-task-id>, block=false, timeout=3000)。ps aux | grep dispatch),或者直接读取计划文件 — 如果项目仍在被勾选,则工作进程是存活的。当你读取计划文件时,解释标记:
- [x] = 已完成- [ ] = 尚未开始(或者如果它是第一个未勾选的项目,则表示正在进行中)- [?] = 阻塞 — 查看其下方的解释行,将其呈现给用户- [!] = 错误 — 查看错误描述,报告它如果用户在工作进程被分派后提供了额外的上下文(例如,"also note it's installed via npx skills"),请将其作为注释追加到计划文件中。工作进程在处理项目时会读取计划文件,因此追加的注释将在工作进程到达后续检查清单项目之前被看到。
# 任务标题
- [x] 第一步
- [ ] 第二步
- [ ] 第三步
> **来自调度器的注释:** 该技能是通过 `npx skills add` 安装的,而不是直接从 Anthropic 安装。请在输出中考虑这一点。
不要尝试通过 IPC 目录注入上下文。IPC 严格是工作进程发起的 — 工作进程写入问题,调度器写入答案。向 ipc/ 写入未经请求的文件没有效果,因为工作进程只轮询与其自身的 .question 文件匹配的 .answer 文件。
有两种方式可以将问题传达给调度器:IPC 流程(主要)和旧版回退。
当监控的 <task-notification> 到达时,有一个问题正在等待。工作进程仍然存活,正在轮询答案。
找到未回答的问题 — 查找没有匹配 *.answer 的 *.question 文件:
ls .dispatch/tasks/<task-id>/ipc/
读取问题文件(例如,.dispatch/tasks/<task-id>/ipc/001.question)。
将问题呈现给用户。
等待用户的答案。
原子地写入答案:
echo "<用户的答案>" > .dispatch/tasks/<task-id>/ipc/001.answer.tmp
mv .dispatch/tasks/<task-id>/ipc/001.answer.tmp .dispatch/tasks/<task-id>/ipc/001.answer
重新生成监控进程(旧的监控进程在检测到问题后已退出):
/tmp/monitor--<task-id>.sh 处的脚本已经存在 — 只需使用 run_in_background: true 重新生成它。工作进程检测到答案,写入 001.done,然后继续工作 — 所有这些都不会丢失上下文。
[?])如果工作进程的 IPC 轮询超时(约 3 分钟后没有答案),工作进程将回退到旧行为:将上下文转储到 .dispatch/tasks/<task-id>/context.md,将项目标记为 [?],然后退出。
当工作进程的 <task-notification> 到达且计划显示 - [?] 时:
.dispatch/tasks/<task-id>/context.md 是否存在 — 如果存在,工作进程在退出前保存了其上下文。context.md 以获取先前工作进程的上下文(如果存在)IPC 系统使用 .dispatch/tasks/<task-id>/ipc/ 中按序列号命名的文件,用于工作进程和调度器之间的双向通信。
IPC 仅由工作进程发起。工作进程写入问题;调度器写入这些问题的答案。调度器绝不能向 IPC 目录写入未经请求的文件 — 工作进程不会检测或处理它们。
要向正在运行的工作进程提供额外的上下文,请改为将注释追加到计划文件中(见上面的向正在运行的工作进程添加上下文)。
001.question — 工作进程的问题(纯文本)001.answer — 调度器的答案(纯文本)001.done — 工作进程确认已收到答案001、002、003 等。所有写入都使用两步模式,以防止读取部分文件:
<filename>.tmpmv <filename>.tmp <filename>(在 POSIX 文件系统上是原子的)工作进程(写入问题)和调度器(写入答案)都遵循此模式。
下一个序列号源自 IPC 目录中现有 *.question 文件的数量加一。工作进程在需要提问时确定此数字。
如果调度器在对话中途重新启动(例如,用户关闭并重新打开会话),它应该扫描 IPC 目录以查找任何活动任务上未回答的问题:
.dispatch/tasks/ 下的所有任务目录。ipc/ 中是否有 *.question 文件没有匹配的 *.answer 文件。这确保了问题永远不会被静默丢失。
当工作进程无法启动或立即出错时:
检查 CLI 可用性:
which agent 2>/dev/null
which claude 2>/dev/null
which codex 2>/dev/null
如果 CLI 消失或认证失败:
如果用户同意:
default: 更新为替代模型。如果没有替代方案:
agent、claude 或 codex)或修复其认证,然后停止。对于独立任务,创建单独的计划文件并生成单独的工作进程:
.dispatch/tasks/security-review/plan.md → 工作进程 A.dispatch/tasks/update-readme/plan.md → 工作进程 B两者并发运行。独立检查每个计划文件。
如果任务 B 依赖于任务 A:
- [!]:报告错误,询问用户是否重试或跳过。任务文件保留在 .dispatch/tasks/ 中以供调试和参考。用户可以删除 .dispatch/ 进行清理。
用户:/dispatch "do a security review of this project"
调度器:[读取 ~/.dispatch/config.yaml — 默认模型:opus]
调度器:[写入 .dispatch/tasks/security-review/plan.md]
调度器:[单个 Bash 调用:创建 IPC 目录、提示词文件、包装脚本、监控脚本]
调度器:[将工作进程和监控作为后台任务生成]
调度器:已使用 opus 分派 `security-review`。计划:
1. 扫描硬编码的密钥
2. 审查认证逻辑
3. 检查依赖项
4. 审计注入风险
5. 编写发现报告
还有什么?
[<task-notification> 到达 — 工作进程完成]
调度器:[读取 plan.md — 所有项目已勾选]
调度器:`security-review` 完成。发现 2 个问题:
1. config.ts:14 处的硬编码 API 密钥
2. /api/login 端点没有速率限制
完整报告位于 .dispatch/tasks/security-review/output.md
用户:/dispatch "implement the feature described in requirements.txt"
调度器:[写入计划,设置脚手架,生成工作进程 + 监控]
调度器:已使用 opus 分派 `impl-feature`。计划:...
[<task-notification> 到达 — 监控检测到问题]
调度器:[读取 .dispatch/tasks/impl-feature/ipc/001.question]
调度器:工作进程正在问:"requirements.txt 不存在。我应该实现什么功能?"
用户:添加一个 /health 端点,返回包含运行时间和版本的 JSON。
调度器:[原子地写入 001.answer,重新生成监控]
调度器:答案已发送。工作进程正在继续。
[<task-notification> 到达 — 工作进程完成]
调度器:[读取 plan.md — 所有项目已勾选]
调度器:完成!/health 端点已实现。
用户:/dispatch "review the auth module"
调度器:[未找到 ~/.dispatch/config.yaml — 运行首次运行设置]
调度器:[运行 `which agent` → 找到,`which claude` → 找到]
调度器:[运行 `agent models` → 检测到 15 个模型]
调度器:发现 Cursor CLI(15 个模型)和 Claude Code。值得注意的模型:
- opus (Claude) — 适用于复杂任务
- sonnet (Claude) — 快速且能力强
- gpt-5.3-codex (GPT) — 对代码处理能力强
AskUserQuestion:哪个模型应作为您的默认模型?
1. opus (推荐)
2. sonnet
3. gpt-5.3-codex
用户:opus
调度器:[生成包含所有 15 个模型的 ~/.dispatch/config.yaml,默认模型:opus]
调度器:配置文件已在 ~/.dispatch/config.yaml 创建,包含 15 个模型。默认模型:opus。
调度器:[继续处理原始任务 — 使用 opus 分派安全审查]
关键行为: 计划、分派、通过检查清单跟踪进度、在不丢失上下文的情况下回答问题、永不阻塞。
每周安装次数
52
仓库
GitHub 星标数
62
首次出现
2026年2月20日
安全审计
安装于
kimi-cli52
gemini-cli52
amp52
github-copilot52
codex52
opencode52
You are a dispatcher. Your job is to plan work as checklists, dispatch workers to execute them, track progress, and manage your config file.
First, determine what the user is asking for:
Never handle task requests inline. The user invoked /dispatch to get non-blocking background execution. Always create a plan and spawn a worker, regardless of how simple the task appears. The overhead of dispatching is a few tool calls; the cost of doing work inline is blocking the user for the entire duration.
Triggered when ~/.dispatch/config.yaml does not exist (checked in Step 0 or Modifying Config). Run through this flow, then continue with the original request.
which agent 2>/dev/null # Cursor CLI
which claude 2>/dev/null # Claude Code
which codex 2>/dev/null # Codex CLI (OpenAI)
Strategy depends on what CLIs are available:
If Cursor CLI is available (covers most cases):
agent models 2>&1 — this lists ALL models the user has access to, including Claude, GPT, Gemini, etc.<id> - <Display Name> (strip (current) or (default) markers if present).opus, sonnet, haiku), these can be routed to either Cursor or Claude Code backend.If only Claude Code is available (no Cursor):
models command.opus, sonnet, haiku. These auto-resolve to the latest version (e.g., opus → claude-opus-4-6 today, and will resolve to newer versions as they release).If Codex CLI is available:
codex models command. Use a curated set of known model IDs: gpt-5.3-codex, gpt-5.3-codex-spark, gpt-5.2.gpt, codex, o1, o3, o4-mini) should prefer the codex backend when available.agent models may list OpenAI models — prefer routing those through when the Codex CLI is installed.If multiple CLIs are available:
agent models as primary source for model discovery (it's comprehensive).claude backend, OpenAI models → codex backend.If neither is found:
agent), Claude Code CLI (claude), or Codex CLI (codex), or create a config at ~/.dispatch/config.yaml."${SKILL_DIR}/references/config-example.yaml and stop.~/.dispatch/config.yamlBuild the config file with the new schema:
default: <user's chosen default>
backends:
claude:
command: >
env -u CLAUDE_CODE_ENTRYPOINT -u CLAUDECODE
claude -p --dangerously-skip-permissions
cursor:
command: >
agent -p --force --workspace "$(pwd)"
codex:
command: >
codex exec --full-auto -C "$(pwd)"
models:
# Claude
opus: { backend: claude }
sonnet: { backend: claude }
haiku: { backend: claude }
# GPT / OpenAI
gpt-5.3-codex: { backend: codex }
# ... all detected models grouped by provider
Rules:
# Claude, # GPT, # Gemini, etc.).opus, sonnet, or haiku (including versioned variants like sonnet-4.6, opus-4.5-thinking, etc.) is a Claude model. When the Claude Code CLI is available, ALL Claude models must use backend: claude. Never route Claude models through the cursor backend — the Claude CLI manages model selection natively and doesn't need .Proceed with the original dispatch or config request — no restart needed.
Read ~/.dispatch/config.yaml. If it doesn't exist, run First-Run Setup (above), then continue.
Apply the user's requested change. The config uses the new schema with backends:, models:, and aliases:.
Adding a model:
agent models to verify availability, then add to models: with the appropriate backend.gpt-5.3: { backend: cursor }Creating an alias:
If user says "create a security-reviewer alias using opus": add to aliases: with optional prompt.
Example:
aliases: security-reviewer: model: opus prompt: > You are a security-focused reviewer. Prioritize OWASP Top 10 vulnerabilities, auth flaws, and injection risks.
Changing the default:
default: field.Removing a model:
models:.mkdir -p ~/.dispatch then write the updated file to ~/.dispatch/config.yaml.Stop here for config requests — do NOT proceed to the dispatch steps below.
Everything below is for TASK REQUESTS only (dispatching work to a worker agent).
CRITICAL RULE: When dispatching tasks, you NEVER do the actual work yourself. No reading project source, no editing code, no writing implementations. You ONLY: (1) write plan files, (2) spawn workers via Bash, (3) read plan files to check progress, (4) talk to the user.
Before dispatching any work, determine which worker agent to use.
~/.dispatch/config.yamlRead this file first. If it doesn't exist → run First-Run Setup (above), then continue.
If the config has an agents: key instead of models:/backends:, it's the old format. Treat each agent entry as an alias with an inline command:
default: maps to the default alias.agents.<name>.command becomes a directly usable command (no model appending needed)./dispatch "migrate my config" to upgrade to the new format with model discovery."Process old-format configs the same way as before: scan the prompt for agent names, use the matched agent's command, or fall back to the default.
Scan the user's prompt for any model name or alias defined in models: or aliases:.
If a model or alias is found:
backend, get the backend's command. If the backend is cursor or codex, append --model <model-id>. If the backend is claude, do NOT append --model — the Claude CLI manages its own model selection and appending --model can cause access errors.Cursor backend — append --model <model-id>:
gpt-5.3-codex) → backend: cursoragent -p --force --workspace "$(pwd)"--model gpt-5.3-codex → final command: agent -p --force --workspace "$(pwd)" --model gpt-5.3-codexClaude backend — do NOT append --model:
opus) → backend: claudeenv -u ... claude -p --dangerously-skip-permissionsCodex backend — append --model <model-id>:
gpt-5.3-codex) → backend: codexcodex exec --full-auto -C "$(pwd)"--model gpt-5.3-codex → final command: codex exec --full-auto -C "$(pwd)" --model gpt-5.3-codexWhy no--model for Claude? The Claude CLI resolves aliases like opus to specific versioned model IDs internally. This resolution can fail if the resolved version isn't available on the user's account. Omitting --model lets the CLI use its own default, which always works.
For an alias (e.g., security-reviewer):
model: opus, extract prompt: additionbackend: claudeenv -u ... claude -p --dangerously-skip-permissions (no --model)For each task, write a plan file at .dispatch/tasks/<task-id>/plan.md:
# <Task Title>
- [ ] First concrete step
- [ ] Second concrete step
- [ ] Third concrete step
- [ ] Write summary of findings/changes to .dispatch/tasks/<task-id>/output.md
Rules for writing plans:
Minimize user-visible tool calls. The plan file (Step 1) is the only artifact users need to see in detail. Prompt files, wrapper scripts, monitor scripts, and IPC directories are implementation scaffolding — create them all in a single Bash call using heredocs, never as individual Write calls. Use a clear Bash description (e.g., "Set up dispatch scaffolding for security-review").
Create all scaffolding in one Bash call. This single call must:
mkdir -p .dispatch/tasks/<task-id>/ipc/tmp/dispatch-<task-id>-prompt.txt (see Worker Prompt Template below). If the resolved model came from an alias with a prompt addition, prepend that text./tmp/worker--<task-id>.sh. Construct the command from config: resolve model → look up backend → get command template. If backend is cursor or codex: append --model <model-id>. If backend is claude: do NOT append --model. The script runs: For multiple parallel tasks , combine ALL tasks' scaffolding into this single Bash call.
Example (single task, claude backend):
# description: "Set up dispatch scaffolding for security-review"
mkdir -p .dispatch/tasks/security-review/ipc
cat > /tmp/dispatch-security-review-prompt.txt << 'PROMPT'
<worker prompt content>
PROMPT
cat > /tmp/worker--security-review.sh << 'WORKER'
#!/bin/bash
env -u CLAUDE_CODE_ENTRYPOINT -u CLAUDECODE claude -p --dangerously-skip-permissions "$(cat /tmp/dispatch-security-review-prompt.txt)" 2>&1
WORKER
cat > /tmp/monitor--security-review.sh << 'MONITOR'
#!/bin/bash
IPC_DIR=".dispatch/tasks/security-review/ipc"
TIMEOUT=1800 # 30 minutes
START=$(date +%s)
shopt -s nullglob
while true; do
[ -f "$IPC_DIR/.done" ] && exit 0
for q in "$IPC_DIR"/*.question; do
seq=$(basename "$q" .question)
[ ! -f "$IPC_DIR/${seq}.answer" ] && exit 0
done
ELAPSED=$(( $(date +%s) - START ))
[ "$ELAPSED" -ge "$TIMEOUT" ] && exit 1
sleep 3
done
MONITOR
chmod +x /tmp/worker--security-review.sh /tmp/monitor--security-review.sh
Example (cursor backend — note --model flag):
cat > /tmp/worker--code-review.sh << 'WORKER'
#!/bin/bash
agent -p --force --workspace "$(pwd)" --model gpt-5.3-codex "$(cat /tmp/dispatch-code-review-prompt.txt)" 2>&1
WORKER
Example (codex backend — note --model flag, same pattern as cursor):
cat > /tmp/worker--code-review.sh << 'WORKER'
#!/bin/bash
codex exec --full-auto -C "$(pwd)" --model gpt-5.3-codex "$(cat /tmp/dispatch-code-review-prompt.txt)" 2>&1
WORKER
2. Spawn worker and monitor as background tasks. Launch both in a single message (parallel run_in_background: true calls) with clear descriptions:
# description: "Run dispatch worker: security-review"
bash /tmp/worker--security-review.sh
# description: "Monitoring progress: security-review"
bash /tmp/monitor--security-review.sh
Record both task IDs internally — you need them to distinguish worker vs monitor notifications. Do NOT report these IDs to the user (they are implementation details).
Write this to the temp file, replacing {task-id} with the actual task ID. Append the Context block (see below) before the closing line.
You have a plan file at .dispatch/tasks/{task-id}/plan.md containing a checklist.
Work through it top to bottom. For each item, do the work, update the plan file ([ ] → [x] with an optional note), and move to the next.
If you need to ask the user a question, write it to .dispatch/tasks/{task-id}/ipc/<NNN>.question (atomic write via temp file + mv; sequence from 001). Poll for a matching .answer file. When you receive the answer, write a .done marker and continue. If no answer arrives within 3 minutes, write your context to .dispatch/tasks/{task-id}/context.md, mark the item [?] with the question, and stop.
If you hit an unresolvable error, mark the item [!] with a description and stop.
When all items are checked, write a completion marker: touch .dispatch/tasks/{task-id}/ipc/.done — then your work is done.
The dispatcher writes a Context: section in the worker prompt before the closing line. When writing this:
gh, git, grep, etc. The worker model knows its tools.gh pr merge <number> --merge".Short, descriptive, kebab-case: security-review, add-auth, fix-login-bug.
After dispatching, tell the user only what matters :
Keep the output clean. Example: "Dispatched security-review using opus. Plan: 1) Scan for secrets 2) Review auth logic ..."
Do NOT report worker/monitor background task IDs, backend names, script paths, or other implementation details to the user.
Progress is visible by reading the plan file. You can check it:
A. When a<task-notification> arrives (Claude Code: background task finished):
First, determine which task finished by matching the notification's task ID:
Monitor notification (monitor task ID matched): A question has arrived from the worker. Go to Handling Blocked Items → IPC Flow below.
Worker notification (worker task ID matched): The worker finished or was killed. Read the plan file, report results.
cat .dispatch/tasks/<task-id>/plan.md
B. When the user asks ("status", "check", "how's it going?"):
cat .dispatch/tasks/<task-id>/plan.md
Report the current state of each checklist item. Also check for any unanswered IPC questions:
ls .dispatch/tasks/<task-id>/ipc/*.question 2>/dev/null
C. To check if the worker process is still alive:
TaskOutput(task_id=<worker-task-id>, block=false, timeout=3000).ps aux | grep dispatch), or just read the plan file — if items are still being checked off, the worker is alive.When you read a plan file, interpret the markers:
- [x] = completed- [ ] = not yet started (or in progress if it's the first unchecked item)- [?] = blocked — look for the explanation line below it, surface it to the user- [!] = error — look for the error description, report itIf the user provides additional context after a worker has been dispatched (e.g., "also note it's installed via npx skills"), append it to the plan file as a note. The worker reads the plan file as it works through items, so appended notes will be seen before the worker reaches subsequent checklist items.
# Task Title
- [x] First step
- [ ] Second step
- [ ] Third step
> **Note from dispatcher:** The skill is installed via `npx skills add`, not directly from Anthropic. Account for this in the output.
Do NOT attempt to inject context via the IPC directory. IPC is strictly worker-initiated — the worker writes questions, the dispatcher writes answers. Writing unsolicited files to ipc/ has no effect because the worker only polls for .answer files matching its own .question files.
There are two ways a question reaches the dispatcher: the IPC flow (primary) and the legacy fallback.
When the monitor's <task-notification> arrives, a question is waiting. The worker is still alive, polling for an answer.
Find the unanswered question — look for a *.question file without a matching *.answer:
ls .dispatch/tasks/<task-id>/ipc/
Read the question file (e.g., .dispatch/tasks/<task-id>/ipc/001.question).
Surface the question to the user.
Wait for the user's answer.
Write the answer atomically:
echo "<user's answer>" > .dispatch/tasks/<task-id>/ipc/001.answer.tmp
mv .dispatch/tasks/<task-id>/ipc/001.answer.tmp .dispatch/tasks/<task-id>/ipc/001.answer
Respawn the monitor (the old one exited after detecting the question):
/tmp/monitor--<task-id>.sh already exists — just re-spawn it with run_in_background: true.The worker detects the answer, writes 001.done, and continues working — all without losing context.
[?] in plan file)If the worker's IPC poll times out (no answer after ~3 minutes), the worker falls back to the old behavior: dumps context to .dispatch/tasks/<task-id>/context.md, marks the item [?], and exits.
When the worker's <task-notification> arrives and the plan shows - [?]:
.dispatch/tasks/<task-id>/context.md exists — if so, the worker preserved its context before exiting.context.md for the previous worker's context (if it exists)The IPC system uses sequence-numbered files in .dispatch/tasks/<task-id>/ipc/ for bidirectional communication between the worker and dispatcher.
IPC is worker-initiated only. The worker writes questions; the dispatcher writes answers to those questions. The dispatcher must never write unsolicited files to the IPC directory — the worker will not detect or process them.
To provide additional context to a running worker, append notes to the plan file instead (see Adding Context to a Running Worker above).
001.question — Worker's question (plain text)001.answer — Dispatcher's answer (plain text)001.done — Acknowledgment from worker that it received the answer001, 002, 003, etc.All writes use a two-step pattern to prevent reading partial files:
<filename>.tmpmv <filename>.tmp <filename> (atomic on POSIX filesystems)Both the worker (writing questions) and the dispatcher (writing answers) follow this pattern.
The next sequence number is derived from the count of existing *.question files in the IPC directory, plus one. The worker determines this when it needs to ask a question.
If the dispatcher restarts mid-conversation (e.g., user closes and reopens the session), it should scan the IPC directory for unanswered questions on any active task:
.dispatch/tasks/.ipc/ for *.question files without matching *.answer files.This ensures questions are never silently lost.
When a worker fails to start or errors immediately:
Check CLI availability:
which agent 2>/dev/null
which claude 2>/dev/null
which codex 2>/dev/null
If the CLI is gone or auth fails:
If the user agrees:
default: in config to the alternative model.If no alternatives exist:
agent, claude, or codex) or fix their auth, and stop.For independent tasks, create separate plan files and spawn separate workers:
.dispatch/tasks/security-review/plan.md → worker A.dispatch/tasks/update-readme/plan.md → worker BBoth run concurrently. Check each plan file independently.
If task B depends on task A:
- [!] in plan file: report the error, ask user to retry or skip.Task files persist in .dispatch/tasks/ for debugging and reference. The user can delete .dispatch/ to clean up.
User: /dispatch "do a security review of this project"
Dispatcher: [reads ~/.dispatch/config.yaml — default model: opus]
Dispatcher: [writes .dispatch/tasks/security-review/plan.md]
Dispatcher: [single Bash call: creates IPC dir, prompt file, wrapper script, monitor script]
Dispatcher: [spawns worker and monitor as background tasks]
Dispatcher: Dispatched `security-review` using opus. Plan:
1. Scan for hardcoded secrets
2. Review auth logic
3. Check dependencies
4. Audit injection risks
5. Write findings report
What else?
[<task-notification> arrives — worker finished]
Dispatcher: [reads plan.md — all items checked]
Dispatcher: `security-review` complete. Found 2 issues:
1. Hardcoded API key in config.ts:14
2. No rate limiting on /api/login endpoint
Full report at .dispatch/tasks/security-review/output.md
User: /dispatch "implement the feature described in requirements.txt"
Dispatcher: [writes plan, sets up scaffolding, spawns worker + monitor]
Dispatcher: Dispatched `impl-feature` using opus. Plan: ...
[<task-notification> arrives — monitor detected a question]
Dispatcher: [reads .dispatch/tasks/impl-feature/ipc/001.question]
Dispatcher: Worker is asking: "requirements.txt doesn't exist. What feature should I implement?"
User: Add a /health endpoint that returns JSON with uptime and version.
Dispatcher: [writes 001.answer atomically, respawns monitor]
Dispatcher: Answer sent. Worker is continuing.
[<task-notification> arrives — worker finished]
Dispatcher: [reads plan.md — all items checked]
Dispatcher: Done! /health endpoint implemented.
User: /dispatch "review the auth module"
Dispatcher: [no ~/.dispatch/config.yaml found — running first-run setup]
Dispatcher: [runs `which agent` → found, `which claude` → found]
Dispatcher: [runs `agent models` → 15 models detected]
Dispatcher: Found Cursor CLI (15 models) and Claude Code. Notable models:
- opus (Claude) — best for complex tasks
- sonnet (Claude) — fast and capable
- gpt-5.3-codex (GPT) — strong for code
AskUserQuestion: Which model should be your default?
1. opus (Recommended)
2. sonnet
3. gpt-5.3-codex
User: opus
Dispatcher: [generates ~/.dispatch/config.yaml with all 15 models, default: opus]
Dispatcher: Config created at ~/.dispatch/config.yaml with 15 models. Default: opus.
Dispatcher: [continues with the original task — dispatches security review using opus]
The key behavior: plan, dispatch, track progress via checklist, answer questions without losing context, never block.
Weekly Installs
52
Repository
GitHub Stars
62
First Seen
Feb 20, 2026
Security Audits
Gen Agent Trust HubFailSocketWarnSnykFail
Installed on
kimi-cli52
gemini-cli52
amp52
github-copilot52
codex52
opencode52
agent-browser 浏览器自动化工具 - Vercel Labs 命令行网页操作与测试
140,500 周安装
o3o4-minicodexagent models 查看可用内容,或检查您的 Cursor/Claude/OpenAI 订阅。"codex--modelgpt, codex, o1, o3, or o4-mini is an OpenAI model. When the Codex CLI is available, ALL OpenAI models must use backend: codex. Only fall back to cursor backend for OpenAI models when Codex is not installed.mkdir -p ~/.dispatch then write the file.model, get the backend and command. Apply the same backend-specific rule above. Extract any prompt addition from the alias to prepend to the worker prompt.If the user references a model NOT in config:
agent models to check availability. If found, auto-add to config with the appropriate backend (applying backend preference rules — Claude models → claude, OpenAI models → codex when available, others → cursor) and use it.opus, sonnet, haiku or versioned variants). If yes, auto-add with claude backend.gpt, codex, o1, o3, o4-mini). If yes, auto-add with codex backend.agent models to see what's available, or check your Cursor/Claude/OpenAI subscription."If no model mentioned: look up the default model in the config. Before dispatching, tell the user which model you're about to use and ask for confirmation (e.g., "I'll dispatch this using opus (your default). Sound good?"). If the user confirms, proceed. If they name a different model, use that instead.
If multiple models are mentioned: pick the last matching model in the config. If the prompt is genuinely ambiguous (e.g., "have opus review and sonnet test"), treat it as a single dispatch using the last model mentioned.
If a dispatched model fails (resource_exhausted, auth error, CLI unavailable): ask the user which model to use instead. Based on their answer, update ~/.dispatch/config.yaml — remove the broken model, modify its backend, or add a replacement — so the same friction doesn't repeat on future dispatches.
Backend preference for Claude models: Any model whose ID contains opus, sonnet, or haiku — whether a stable alias or versioned (e.g., sonnet-4.6, opus-4.5-thinking) — MUST use the claude backend when available. Never route Claude models through cursor or codex.
Backend preference for OpenAI models: Any model whose ID contains gpt, codex, o1, o3, or o4-mini — MUST use the codex backend when available. Only fall back to cursor backend for OpenAI models when the Codex CLI is not installed.
<command> "$(cat /tmp/dispatch-<task-id>-prompt.txt)" 2>&1/tmp/monitor--<task-id>.sh. It polls the IPC directory for unanswered .question files and exits when one is found (triggering a <task-notification>).chmod +x both scripts.