loki-mode by davila7/claude-code-templates
npx skills add https://github.com/davila7/claude-code-templates --skill loki-mode

Version 2.35.0 | PRD to Production | Zero Human Intervention
Research-enhanced: OpenAI SDK, DeepMind, Anthropic, AWS Bedrock, Agent SDK, HN Production (2025)
.loki/CONTINUITY.md - Your working memory + "Mistakes & Learnings"
.loki/memory/ - Relevant memories (episodic patterns, anti-patterns)
.loki/state/orchestrator.json - Current phase/metrics
.loki/queue/pending.json - Next tasks
| File | Purpose | Update When |
|---|---|---|
| .loki/CONTINUITY.md | Working memory - what am I doing NOW? | Every turn |
| .loki/memory/semantic/ | Generalized patterns & anti-patterns | After task completion |
| .loki/memory/episodic/ | Specific interaction traces | After each action |
| .loki/metrics/efficiency/ | Task efficiency scores & rewards | After each task |
| .loki/specs/openapi.yaml | API spec - source of truth | Architecture changes |
| CLAUDE.md | Project context - arch & patterns | Significant changes |
| .loki/queue/*.json | Task states | Every task change |
START
  |
  +-- Read CONTINUITY.md -----------+
  |                                 |
  +-- Task in-progress?             |
  |     +-- YES: Resume             |
  |     +-- NO: Check pending queue |
  |                                 |
  +-- Pending tasks?                |
  |     +-- YES: Claim highest priority
  |     +-- NO: Check phase completion
  |                                 |
  +-- Phase done?                   |
  |     +-- YES: Advance to next phase
  |     +-- NO: Generate tasks for phase
  |                                 |
LOOP <------------------------------+
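The decision tree above can be expressed as a small pure function. This is a sketch only; the state keys (`task_in_progress`, `pending_tasks`, `phase_done`) are illustrative assumptions, not the actual orchestrator.json schema:

```python
def next_action(state: dict) -> str:
    """Return the next step per the decision tree (hypothetical state keys)."""
    if state.get("task_in_progress"):
        return "resume"                      # task in progress -> resume it
    if state.get("pending_tasks"):
        return "claim_highest_priority"      # pending work -> claim it
    if state.get("phase_done"):
        return "advance_phase"               # phase complete -> move on
    return "generate_tasks_for_phase"        # otherwise, create tasks
```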
Bootstrap -> Discovery -> Architecture -> Infrastructure
    |            |             |               |
 (Setup)   (Analyze PRD)    (Design)    (Cloud/DB Setup)
                                               |
Development <- QA <- Deployment <- Business Ops <- Growth Loop
    |          |         |             |              |
 (Build)    (Test)   (Release)     (Monitor)      (Iterate)
- Spec-First: OpenAPI -> Tests -> Code -> Validate
- Code Review: Blind Review (parallel) -> Debate (if disagree) -> Devil's Advocate -> Merge
- Guardrails: Input Guard (BLOCK) -> Execute -> Output Guard (VALIDATE) (OpenAI SDK)
- Tripwires: Validation fails -> Halt execution -> Escalate or retry
- Fallbacks: Try primary -> Model fallback -> Workflow fallback -> Human escalation
- Explore-Plan-Code: Research files -> Create plan (NO CODE) -> Execute plan (Anthropic)
- Self-Verification: Code -> Test -> Fail -> Learn -> Update CONTINUITY.md -> Retry
- Constitutional Self-Critique: Generate -> Critique against principles -> Revise (Anthropic)
- Memory Consolidation: Episodic (trace) -> Pattern Extraction -> Semantic (knowledge)
- Hierarchical Reasoning: High-level planner -> Skill selection -> Local executor (DeepMind)
- Tool Orchestration: Classify Complexity -> Select Agents -> Track Efficiency -> Reward Learning
- Debate Verification: Proponent defends -> Opponent challenges -> Synthesize (DeepMind)
- Handoff Callbacks: on_handoff -> Pre-fetch context -> Transfer with data (OpenAI SDK)
- Narrow Scope: 3-5 steps max -> Human review -> Continue (HN Production)
- Context Management: Manual selection -> Focused context -> Fresh context per task (HN Production)
- Deterministic Validation: LLM output -> Rule-based checks -> Retry or approve (HN Production)
- Routing Pattern: Simple task -> Direct dispatch | Complex task -> Supervisor orchestration (AWS Bedrock)
- E2E Browser Testing: Playwright MCP -> Automate browser -> Verify UI features visually (Anthropic Harness)
# Launch with autonomous permissions
claude --dangerously-skip-permissions

This system runs with ZERO human intervention.
Do NOT edit autonomy/run.sh while it is running - editing a running bash script corrupts execution (bash reads scripts incrementally, not all at once). If run.sh needs a fix, note it in CONTINUITY.md for the next session.

These files are part of the running Loki Mode process. Editing them will crash the session:
| File | Reason |
|---|---|
| ~/.claude/skills/loki-mode/autonomy/run.sh | Currently executing bash script |
| .loki/dashboard/* | Served by active HTTP server |

If bugs are found in these files, document them in .loki/CONTINUITY.md under "Pending Fixes" for manual repair after the session ends.
+-------------------------------------------------------------------+
| REASON: What needs to be done next?                               |
| - READ .loki/CONTINUITY.md first (working memory)                 |
| - READ "Mistakes & Learnings" to avoid past errors                |
| - Check orchestrator.json, review pending.json                    |
| - Identify highest-priority unblocked task                        |
+-------------------------------------------------------------------+
| ACT: Execute the task                                             |
| - Dispatch subagent via Task tool OR execute directly             |
| - Write code, run tests, fix issues                               |
| - Commit changes atomically (git checkpoint)                      |
+-------------------------------------------------------------------+
| REFLECT: Did it work? What next?                                  |
| - Verify task success (tests pass, no errors)                     |
| - UPDATE .loki/CONTINUITY.md with progress                        |
| - Check completion promise - are we done?                         |
+-------------------------------------------------------------------+
| VERIFY: Let AI test its own work (2-3x quality improvement)       |
| - Run automated tests (unit, integration, E2E)                    |
| - Check compilation/build (no errors or warnings)                 |
| - Verify against spec (.loki/specs/openapi.yaml)                  |
|                                                                   |
| IF VERIFICATION FAILS:                                            |
| 1. Capture error details (stack trace, logs)                      |
| 2. Analyze root cause                                             |
| 3. UPDATE CONTINUITY.md "Mistakes & Learnings"                    |
| 4. Rollback to last good git checkpoint (if needed)               |
| 5. Apply learning and RETRY from REASON                           |
+-------------------------------------------------------------------+
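The verify-then-retry portion of this cycle can be sketched as a loop. This is an illustration only: `reason`, `act`, and `verify` are placeholder callables standing in for the model's steps, and the `lessons` list stands in for the "Mistakes & Learnings" section of CONTINUITY.md:

```python
def rarv(reason, act, verify, max_retries=3):
    """Sketch of REASON -> ACT -> VERIFY with learning on failure."""
    lessons = []  # stands in for CONTINUITY.md "Mistakes & Learnings"
    for _ in range(max_retries):
        plan = reason(lessons)       # REASON: plan with past lessons in view
        result = act(plan)           # ACT: execute the plan
        ok, error = verify(result)   # VERIFY: deterministic checks
        if ok:
            return result
        lessons.append(error)        # record the lesson, retry from REASON
    raise RuntimeError("verification kept failing: escalate to human")
```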
CRITICAL: Use the right model for each task type. Opus is ONLY for planning/architecture.
| Model | Use For | Examples |
|---|---|---|
| Opus 4.5 | PLANNING ONLY - Architecture & high-level decisions | System design, architecture decisions, planning, security audits |
| Sonnet 4.5 | DEVELOPMENT - Implementation & functional testing | Feature implementation, API endpoints, bug fixes, integration/E2E tests |
| Haiku 4.5 | OPERATIONS - Simple tasks & monitoring | Unit tests, docs, bash commands, linting, monitoring, file operations |
# Opus for planning/architecture ONLY
Task(subagent_type="Plan", model="opus", description="Design system architecture", prompt="...")

# Sonnet for development and functional testing
Task(subagent_type="general-purpose", description="Implement API endpoint", prompt="...")
Task(subagent_type="general-purpose", description="Write integration tests", prompt="...")

# Haiku for unit tests, monitoring, and simple tasks (PREFER THIS for speed)
Task(subagent_type="general-purpose", model="haiku", description="Run unit tests", prompt="...")
Task(subagent_type="general-purpose", model="haiku", description="Check service health", prompt="...")

# Launch 10+ Haiku agents in parallel for the unit test suite
for test_file in test_files:
    Task(subagent_type="general-purpose", model="haiku",
         description=f"Run unit tests: {test_file}",
         prompt="...", run_in_background=True)
Background Agents:
# Launch background agent - returns immediately with output_file path
Task(description="Long analysis task", run_in_background=True, prompt="...")
# Output truncated to 30K chars - use the Read tool to check the full output file

Agent Resumption (for interrupted/long-running tasks):
# First call returns agent_id
result = Task(description="Complex refactor", prompt="...")
# The agent_id from result can be used to resume later
Task(resume="agent-abc123", prompt="Continue from where you left off")

When to use resume:
Two dispatch modes based on task complexity - reduces latency for simple tasks:
| Mode | When to Use | Behavior |
|---|---|---|
| Direct Routing | Simple, single-domain tasks | Route directly to specialist agent, skip orchestration |
| Supervisor Mode | Complex, multi-step tasks | Full decomposition, coordination, result synthesis |

Decision Logic:
Task Received
  |
  +-- Is the task single-domain? (one file, one skill, clear scope)
  |     +-- YES: Direct Route to specialist agent
  |     |     - Faster (no orchestration overhead)
  |     |     - Minimal context (avoids confusion)
  |     |     - Examples: "Fix typo in README", "Run unit tests"
  |     |
  |     +-- NO: Supervisor Mode
  |           - Full task decomposition
  |           - Coordinate multiple agents
  |           - Synthesize results
  |           - Examples: "Implement auth system", "Refactor API layer"
  |
  +-- Fallback: If intent is unclear, use Supervisor Mode
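This routing decision can be sketched as a classifier. The heuristics and task fields (`files`, `skills`, `clear_scope`, `intent_unclear`) are assumptions for illustration, not a real API:

```python
def choose_mode(task: dict) -> str:
    """Route a task per the decision logic above (illustrative heuristics)."""
    if task.get("intent_unclear"):
        return "supervisor"  # fallback: unclear intent -> supervisor mode
    single_domain = (
        len(task.get("files", [])) <= 1      # one file
        and len(task.get("skills", [])) <= 1 # one skill
        and task.get("clear_scope", False)   # clear scope
    )
    return "direct" if single_domain else "supervisor"
```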
Direct Routing Examples (Skip Orchestration):
# Simple tasks -> direct dispatch to Haiku
Task(model="haiku", description="Fix import in utils.py", prompt="...")          # Direct
Task(model="haiku", description="Run linter on src/", prompt="...")              # Direct
Task(model="haiku", description="Generate docstring for function", prompt="...") # Direct

# Complex tasks -> supervisor orchestration (default Sonnet)
Task(description="Implement user authentication with OAuth", prompt="...")  # Supervisor
Task(description="Refactor database layer for performance", prompt="...")   # Supervisor

Context Depth by Routing Mode:
"Keep in mind, complex task histories might confuse simpler subagents." - AWS Best Practices
Critical: Features are NOT complete until verified via browser automation.
# Enable Playwright MCP for E2E testing
# In settings or via mcp_servers config:
mcp_servers = {
    "playwright": {"command": "npx", "args": ["@playwright/mcp@latest"]}
}
# The agent can then automate a browser to verify features work visually

E2E Verification Flow:
"Claude mostly did well at verifying features end-to-end once explicitly prompted to use browser automation tools." - Anthropic Engineering
Note: Playwright cannot detect browser-native alert modals. Use custom UI for confirmations.
Inspired by NVIDIA ToolOrchestra: track efficiency, learn from rewards, adapt agent selection.
| Metric | What to Track | Store In |
|---|---|---|
| Wall time | Seconds from start to completion | .loki/metrics/efficiency/ |
| Agent count | Number of subagents spawned | .loki/metrics/efficiency/ |
| Retry count | Attempts before success | .loki/metrics/efficiency/ |
| Model usage | Haiku/Sonnet/Opus call distribution | .loki/metrics/efficiency/ |

OUTCOME REWARD: +1.0 (success) | 0.0 (partial) | -1.0 (failure)
EFFICIENCY REWARD: 0.0-1.0 based on resources vs. baseline
PREFERENCE REWARD: inferred from user actions (commit/revert/edit)
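One way to combine these signals is a weighted sum. The blending weight and the time-ratio formula below are assumptions for illustration; the skill does not specify how the three rewards are combined:

```python
def combined_reward(outcome: float, wall_time: float, baseline_time: float,
                    weight_eff: float = 0.3) -> float:
    """Blend outcome reward (+1/0/-1) with an efficiency reward in [0, 1].

    Efficiency is sketched as baseline_time / wall_time, clipped to [0, 1]:
    finishing at or faster than baseline scores 1.0. The 0.3 weight is a
    hypothetical choice, not part of the skill.
    """
    efficiency = max(0.0, min(1.0, baseline_time / wall_time)) if wall_time else 1.0
    return outcome + weight_eff * efficiency
```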
| Complexity | Max Agents | Planning | Development | Testing | Review |
|---|---|---|---|---|---|
| Trivial | 1 | - | haiku | haiku | skip |
| Simple | 2 | - | haiku | haiku | single |
| Moderate | 4 | sonnet | sonnet | haiku | standard (3 parallel) |
| Complex | 8 | opus | sonnet | haiku | deep (+ devil's advocate) |
| Critical | 12 | opus | sonnet | sonnet | exhaustive + human checkpoint |
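The budget table above translates naturally into a lookup structure (a partial transcription of the table; the dict shape itself is an illustrative choice):

```python
# Model/agent budgets copied from the complexity table above.
BUDGETS = {
    "trivial":  {"max_agents": 1,  "planning": None,     "dev": "haiku",  "testing": "haiku"},
    "simple":   {"max_agents": 2,  "planning": None,     "dev": "haiku",  "testing": "haiku"},
    "moderate": {"max_agents": 4,  "planning": "sonnet", "dev": "sonnet", "testing": "haiku"},
    "complex":  {"max_agents": 8,  "planning": "opus",   "dev": "sonnet", "testing": "haiku"},
    "critical": {"max_agents": 12, "planning": "opus",   "dev": "sonnet", "testing": "sonnet"},
}

def agent_budget(complexity: str) -> dict:
    """Look up the resource budget for a classified complexity level."""
    return BUDGETS[complexity.lower()]
```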
See references/tool-orchestration.md for full implementation details.
Single-Responsibility Principle: each agent should have ONE clear goal and a narrow scope. (UiPath Best Practices)
Every subagent dispatch MUST include:

## GOAL (What success looks like)
[High-level objective, not just the action]
Example: "Refactor authentication for maintainability and testability"
NOT: "Refactor the auth file"

## CONSTRAINTS (What you cannot do)
- No third-party dependencies without approval
- Maintain backwards compatibility with the v1.x API
- Keep response time under 200ms

## CONTEXT (What you need to know)
- Related files: [list with brief descriptions]
- Previous attempts: [what was tried, why it failed]

## OUTPUT FORMAT (What to deliver)
- [ ] Pull request with a Why/What/Trade-offs description
- [ ] Unit tests with >90% coverage
- [ ] Updated API documentation

## WHEN COMPLETE
Report back with: WHY, WHAT, TRADE-OFFS, RISKS
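The four-section brief above can be assembled mechanically before dispatch. The helper below is hypothetical (not part of the skill); it only shows how the template's sections compose into a single prompt string:

```python
def build_dispatch_prompt(goal, constraints, context, outputs):
    """Assemble the dispatch brief from the template's four sections."""
    def bullets(items):
        return "\n".join(f"- {item}" for item in items)
    return (
        f"## GOAL\n{goal}\n\n"
        f"## CONSTRAINTS\n{bullets(constraints)}\n\n"
        f"## CONTEXT\n{bullets(context)}\n\n"
        f"## OUTPUT FORMAT\n{bullets(outputs)}\n"
    )
```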
Never ship code without passing all quality gates:
Guardrail Execution Modes:
Research insight: blind review + Devil's Advocate reduces false positives by 30% (CONSENSAGENT, 2025). OpenAI insight: "Layered defense - multiple specialized guardrails create resilient agents."
See references/quality-control.md and references/openai-patterns.md for details.
Loki Mode has 37 specialized agent types across 7 swarms. The orchestrator spawns only the agents your project needs.
| Swarm | Agent Count | Examples |
|---|---|---|
| Engineering | 8 | frontend, backend, database, mobile, api, qa, perf, infra |
| Operations | 8 | devops, sre, security, monitor, incident, release, cost, compliance |
| Business | 8 | marketing, sales, finance, legal, support, hr, investor, partnerships |
| Data | 3 | ml, data-eng, analytics |
| Product | 3 | pm, design, techwriter |
| Growth | 4 | growth-hacker, community, success, lifecycle |
| Review | 3 | code, business, security |
See references/agent-types.md for complete definitions and capabilities.
| Issue | Cause | Solution |
|---|---|---|
| Agent stuck/no progress | Lost context | Read .loki/CONTINUITY.md first thing every turn |
| Task repeating | Not checking queue state | Check .loki/queue/*.json before claiming |
| Code review failing | Skipped static analysis | Run static analysis BEFORE AI reviewers |
| Breaking API changes | Code before spec | Follow the Spec-First workflow |
| Rate limit hit | Too many parallel agents | Check circuit breakers, use exponential backoff |
| Tests failing after merge | Skipped quality gates | Never bypass Severity-Based Blocking |
| Can't find what to do | Not following decision tree | Use the decision tree, check orchestrator.json |
| Memory/context growing | Not using ledgers | Write to ledgers after completing tasks |
Based on OpenAI Agent Safety Patterns:
opus -> sonnet -> haiku (if rate limited or unavailable)
Full workflow fails -> Simplified workflow -> Decompose into subtasks -> Human escalation
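The model-fallback chain above can be sketched as a try-in-order helper. This is illustrative only: `call` is a placeholder for a model invocation, and catching `RuntimeError` stands in for whatever rate-limit/unavailable error the real API raises:

```python
def with_model_fallback(call, models=("opus", "sonnet", "haiku")):
    """Try each model in order; fall back on failure (illustrative)."""
    last_err = None
    for model in models:
        try:
            return call(model)          # first success wins
        except RuntimeError as err:     # e.g. rate limited / unavailable
            last_err = err              # remember and try the next model
    raise last_err                      # all models failed -> escalate
```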
| Trigger | Action |
|---|---|
| retry_count > 3 | Pause and escalate |
| domain in [payments, auth, pii] | Require approval |
| confidence_score < 0.6 | Pause and escalate |
| wall_time > expected * 3 | Pause and escalate |
| tokens_used > budget * 0.8 | Pause and escalate |
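The trigger table above reduces to a simple predicate. For brevity this sketch collapses the "require approval" row into the same boolean as escalation; the metric keys are illustrative:

```python
def should_escalate(m: dict) -> bool:
    """True if any escalation trigger from the table fires."""
    return (
        m.get("retry_count", 0) > 3
        or m.get("domain") in {"payments", "auth", "pii"}
        or m.get("confidence_score", 1.0) < 0.6
        or m.get("wall_time", 0) > m.get("expected_time", float("inf")) * 3
        or m.get("tokens_used", 0) > m.get("budget", float("inf")) * 0.8
    )
```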
See references/openai-patterns.md for the full fallback implementation.
Read the target project's AGENTS.md if it exists (OpenAI/AAIF standard):
Context Priority:
1. AGENTS.md (closest to current file)
2. CLAUDE.md (Claude-specific)
3. .loki/CONTINUITY.md (session state)
4. Package docs
5. README.md
Self-critique against explicit principles, not just learned preferences.
core_principles:
- "Never delete production data without an explicit backup"
- "Never commit secrets or credentials to version control"
- "Never bypass quality gates for speed"
- "Always verify tests pass before marking a task complete"
- "Never claim completion without running actual tests"
- "Prefer simple solutions over clever ones"
- "Document decisions, not just code"
- "When unsure, reject the action or flag it for review"
1. Generate response/code
2. Critique against each principle
3. Revise if any principle is violated
4. Only then proceed with the action
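The generate-critique-revise loop can be sketched with principles as (name, predicate) pairs. Everything here is illustrative: `violated` predicates and the `revise` callable stand in for the model's critique and revision steps:

```python
def constitutional_pass(candidate, checks, revise):
    """Critique `candidate` against each principle; revise on violation.

    checks: list of (name, violated) where violated(text) -> bool.
    revise: callable (text, principle_name) -> revised text.
    """
    for name, violated in checks:
        if violated(candidate):
            candidate = revise(candidate, name)  # revise, then keep checking
    return candidate
```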
See references/lab-research-patterns.md for the Constitutional AI implementation.
For critical changes, use structured debate between AI critics.
Proponent (defender)  --> Presents proposal with evidence
        |
        v
Opponent (challenger) --> Finds flaws, challenges claims
        |
        v
Synthesizer           --> Weighs arguments, produces verdict
        |
        v
If disagreement persists --> Escalate to human

Use for: architecture decisions, security-sensitive changes, major refactors.
See references/lab-research-patterns.md for debate verification details.
Battle-tested insights from practitioners building real systems.
task_constraints:
  max_steps_before_review: 3-5
characteristics:
- Specific, well-defined objectives
- Pre-classified inputs
- Deterministic success criteria
- Verifiable outputs

confidence >= 0.95 --> Auto-approve with audit log
confidence >= 0.70 --> Quick human review
confidence >= 0.40 --> Detailed human review
confidence <  0.40 --> Escalate immediately
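The confidence thresholds above map directly to a tiering function (a direct transcription of the four rules; the tier names are shorthand):

```python
def review_tier(confidence: float) -> str:
    """Map a confidence score to the review tiers listed above."""
    if confidence >= 0.95:
        return "auto-approve"     # with audit log
    if confidence >= 0.70:
        return "quick-review"     # quick human review
    if confidence >= 0.40:
        return "detailed-review"  # detailed human review
    return "escalate"             # escalate immediately
```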
Wrap agent outputs with rule-based validation (NOT LLM-judged):
1. Agent generates output
2. Run linter (deterministic)
3. Run tests (deterministic)
4. Check compilation (deterministic)
5. Only then: human or AI review
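The deterministic gate above can be sketched as an ordered check pipeline that stops at the first failure. The check functions here are placeholders for real linter/test/compile invocations:

```python
def deterministic_gate(output, checks):
    """Run rule-based checks in order; report the first failure.

    checks: list of (name, fn) where fn(output) -> True on pass.
    Returns (True, None) if all pass, else (False, failed_check_name).
    """
    for name, check in checks:
        if not check(output):
            return (False, name)  # stop at first failing check
    return (True, None)           # all deterministic checks passed
```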
principles:
- "Less is more" - focused beats comprehensive
- Manual selection outperforms automatic RAG
- Fresh conversations per major task
- Remove outdated information aggressively
context_budget:
  target: "< 10k tokens for context"
  reserve: "90% for model reasoning"

Use sub-agents to prevent token waste on noisy subtasks:
Main agent (focused) --> Sub-agent (file search)
                     --> Sub-agent (test running)
                     --> Sub-agent (linting)
See references/production-patterns.md for full practitioner patterns.
| Condition | Action |
|---|---|
| Product launched, stable for 24h | Enter growth-loop mode |
| Unrecoverable failure | Save state, halt, request human |
| PRD updated | Diff, create delta tasks, continue |
| Revenue target hit | Log success, continue optimizing |
| Runway < 30 days | Alert, optimize costs aggressively |
.loki/
+-- CONTINUITY.md         # Working memory (read/update every turn)
+-- specs/
|   +-- openapi.yaml      # API spec - source of truth
+-- queue/
|   +-- pending.json      # Tasks waiting to be claimed
|   +-- in-progress.json  # Currently executing tasks
|   +-- completed.json    # Finished tasks
|   +-- dead-letter.json  # Failed tasks for review
+-- state/
|   +-- orchestrator.json # Master state (phase, metrics)
|   +-- agents/           # Per-agent state files
|   +-- circuit-breakers/ # Rate limiting state
+-- memory/
|   +-- episodic/         # Specific interaction traces (what happened)
|   +-- semantic/         # Generalized patterns (how things work)
|   +-- skills/           # Learned action sequences (how to do X)
|   +-- ledgers/          # Agent-specific checkpoints
|   +-- handoffs/         # Agent-to-agent transfers
+-- metrics/
|   +-- efficiency/       # Task efficiency scores (time, agents, retries)
|   +-- rewards/          # Outcome/efficiency/preference rewards
|   +-- dashboard.json    # Rolling metrics summary
+-- artifacts/
    +-- reports/          # Generated reports/dashboards
See references/architecture.md for the full structure and state schemas.
Loki Mode                          # Start fresh
Loki Mode with PRD at path/to/prd  # Start with a PRD

Skill Metadata:
| Field | Value |
|---|---|
| Trigger | "Loki Mode" or "Loki Mode with PRD at [path]" |
| Skip When | Need human approval, want to review the plan first, single small task |
| Related Skills | subagent-driven-development, executing-plans |
Detailed documentation is split into reference files for progressive loading:
| Reference | Content |
|---|---|
| references/core-workflow.md | Full RARV cycle, CONTINUITY.md template, autonomy rules |
| references/quality-control.md | Quality gates, anti-sycophancy, blind review, severity-based blocking |
| references/openai-patterns.md | OpenAI Agents SDK: guardrails, tripwires, handoffs, fallbacks |
| references/lab-research-patterns.md | DeepMind + Anthropic: Constitutional AI, debate, world models |
| references/production-patterns.md | HN 2025: what actually works in production, context engineering |
| references/advanced-patterns.md | 2025 research: MAR, Iter-VF, GoalAct, CONSENSAGENT |
| references/tool-orchestration.md | ToolOrchestra patterns: efficiency, rewards, dynamic selection |
| references/memory-system.md | Episodic/semantic memory, consolidation, Zettelkasten linking |
| references/agent-types.md | All 37 agent types with full capabilities |
| references/task-queue.md | Queue system, dead-letter handling, circuit breakers |
| references/sdlc-phases.md | All phases with detailed workflows and testing |
| references/spec-driven-dev.md | OpenAPI-first workflow, validation, contract testing |
| references/architecture.md | Directory structure, state schemas, bootstrap |
| references/mcp-integration.md | MCP server capabilities and integration |
| references/claude-best-practices.md | Boris Cherny patterns, thinking mode, ledgers |
| references/deployment.md | Cloud deployment instructions per provider |
| references/business-ops.md | Business operation workflows |
Version: 2.32.0 | Lines: ~600 | Research-Enhanced: Labs + HN Production Patterns
Weekly Installs: 157
GitHub Stars: 23.4K
First Seen: Jan 25, 2026
Security Audits: Gen Agent Trust Hub: Fail | Socket: Warn | Snyk: Warn
Installed on: opencode (138), claude-code (133), gemini-cli (128), cursor (122), codex (116), github-copilot (113)