npx skills add https://github.com/a5c-ai/babysitter --skill babysit
Orchestrate .a5c/runs/<runId>/ through iterative execution. Use the SDK CLI to drive the orchestration loop.
Make sure you have the latest version of the CLI:
npm i -g @a5c-ai/babysitter@latest @a5c-ai/babysitter-sdk@latest @a5c-ai/babysitter-breakpoints@latest
Then use the CLI alias: CLI="babysitter"
Alternatively, use the CLI alias: CLI="npx -y @a5c-ai/babysitter-sdk@latest"
Make sure jq is installed and available on the PATH; if not, install it.
The babysitter workflow has 4 steps:
Interview the user for the intent, requirements, goal, scope, etc. using the AskUserQuestion tool (or a breakpoint if running in non-interactive mode), before setting up the in-session loop.
A multi-step phase to understand the intent and perspective before approaching the process building: research the repo, do short online research if needed, do short research in the target repo, and gather additional instructions, intent, and library material (processes, specializations, skills, subagents, methodologies, references, etc.) plus the guide for methodology building (clarifications regarding the intent, requirements, goal, scope, etc.). The library is at [skill-root]/process/specializations/** and [skill-root]/process/methodologies/.
The first step should be to look at the state of the repo, then find the most relevant processes, specializations, skills, subagents, methodologies, references, etc. to use as references.
Then this phase can include online research, repo research, user questions, and other steps, one after another, until the intent, requirements, goal, scope, etc. are clear and the user is satisfied with the understanding. After each step, decide the type of the next step to take. Do not plan more than one step ahead in this phase; the same step type can be used more than once.
After the interview phase, create the complete custom process files (js and jsons) for the run according to the Process Creation Guidelines and methodologies section. Also install the babysitter-sdk inside .a5c (in .a5c/package.json) if it is not already installed. You must abide by the syntax and structure of the process files from the process library.
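As a sanity check, you can verify the SDK dependency is actually declared before creating the run. A minimal sketch (the temp-dir scaffolding below is demo-only; in a real run you would inspect the repo's own .a5c/package.json):

```shell
# Sketch: check whether babysitter-sdk is declared in .a5c/package.json.
# The temp dir is self-contained demo scaffolding; inspect the real .a5c in practice.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/.a5c"
printf '{"dependencies":{"@a5c-ai/babysitter-sdk":"latest"}}\n' > "$DEMO/.a5c/package.json"
if grep -q '@a5c-ai/babysitter-sdk' "$DEMO/.a5c/package.json"; then
  SDK_STATE="installed"
else
  SDK_STATE="missing"   # then: cd .a5c && npm i @a5c-ai/babysitter-sdk@latest
fi
echo "$SDK_STATE"
```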
For new runs:
bash "${CLAUDE_PLUGIN_ROOT}/skills/babysit/scripts/setup-babysitter-run.sh" --claude-session-id "${CLAUDE_SESSION_ID}" $PROMPT
$CLI run:create --process-id <id> --entry <path> --inputs <file> --json
bash "${CLAUDE_PLUGIN_ROOT}/skills/babysit/scripts/associate-session-with-run.sh" \
--run-id <runId-from-step-2> \
--claude-session-id "${CLAUDE_SESSION_ID}"
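The run id produced by run:create is what the association step needs. A sketch of extracting it with jq, using a stand-in payload (the runId field name is an assumption; verify it against the real --json output before relying on it):

```shell
# Sketch: parse the run id out of run:create --json output.
# CREATE_OUTPUT stands in for: $($CLI run:create --process-id <id> --entry <path> --inputs <file> --json)
CREATE_OUTPUT='{"runId":"run-123","processId":"demo-process"}'
RUN_ID=$(printf '%s' "$CREATE_OUTPUT" | jq -r '.runId')
echo "$RUN_ID"
```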
For resuming existing runs:
bash "${CLAUDE_PLUGIN_ROOT}/skills/babysit/scripts/setup-babysitter-run-resume.sh" --claude-session-id "${CLAUDE_SESSION_ID}" --run-id RUN_ID
$CLI run:iterate .a5c/runs/<runId> --json --iteration <n>
Output:
{
"iteration": 1,
"status": "executed|waiting|completed|failed|none",
"action": "executed-tasks|waiting|none",
"reason": "auto-runnable-tasks|breakpoint-waiting|terminal-state",
"count": 3,
"completionSecret": "only-present-when-completed",
"metadata": { "runId": "...", "processId": "..." }
}
Status values:
"executed" - Tasks executed, continue looping
"waiting" - Breakpoint/sleep, pause until released
"completed" - Run finished successfully
"failed" - Run failed with error
"none" - No pending effects

List tasks:
$CLI task:list .a5c/runs/<runId> --pending --json
Output:
{
"tasks": [
{
"effectId": "effect-abc123",
"kind": "node|agent|skill|breakpoint",
"label": "auto",
"status": "requested"
}
]
}
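What to do next hinges on the status field of these payloads. A sketch of reading it with jq (stand-in payload; this is a one-shot check, not a programmatic loop):

```shell
# Sketch: inspect the status of a saved run:iterate --json payload.
ITER_OUTPUT='{"iteration":1,"status":"waiting","action":"waiting","reason":"breakpoint-waiting"}'
STATUS=$(printf '%s' "$ITER_OUTPUT" | jq -r '.status')
echo "$STATUS"
```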
Run the effect externally to the SDK (by you, your hook, or another worker). After execution (by delegation to an agent or skill), post the outcome summary into the run by calling task:post, which:
- writes tasks/<effectId>/result.json
- appends an EFFECT_RESOLVED event to the journal
IMPORTANT:
If running in interactive mode, use the AskUserQuestion tool to ask the user the question and get the answer.
Then post the result of the breakpoint to the run by calling task:post.
Otherwise:
If running in non-interactive mode, use the breakpoint create command to create the breakpoint and get the answer from the user:
npx @a5c-ai/babysitter-breakpoints breakpoint create --tag <tag> --question "<question>" --title "<title>" --run-id <runId> --file <file,format,language,label> --file <file,format,language,label> --file <file,format,language,label> ...
to create the breakpoint and get the answer from the user. Breakpoints are meant for human approval through the breakpoint tool. NEVER prompt directly, and never release or approve this breakpoint yourself. You may, however, need to post the result of the breakpoint to the run by calling task:post when the breakpoint is resolved.
IMPORTANT: Do NOT write result.json directly. The SDK owns that file.
Workflow:
1. Write your result value to a separate file (e.g. output.json or value.json):
{
"score": 85,
"details": { ... }
}
2. Post the result, passing the value file:
$CLI task:post .a5c/runs/<runId> <effectId> \
--status ok \
--value tasks/<effectId>/output.json \
--json
The task:post command will:
- create result.json (including schema, metadata, and your value)
- append an EFFECT_RESOLVED event to the journal

Available flags:
--status <ok|error> (required)
--value <file> - Result value (for status=ok)
--error <file> - Error payload (for status=error)
--stdout-file <file> - Capture stdout
--stderr-file <file> - Capture stderr
--started-at <iso8601> - Task start time
--finished-at <iso8601> - Task end time
--metadata <file> - Additional metadata JSON

Common mistake to avoid:
# ❌ WRONG: Writing result.json directly
echo '{"result": {...}}' > tasks/<effectId>/result.json
$CLI task:post <runId> <effectId> --status ok
# ✅ CORRECT: Write value to separate file, let SDK create result.json
echo '{"score": 85}' > tasks/<effectId>/output.json
$CLI task:post <runId> <effectId> --status ok --value tasks/<effectId>/output.json
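The correct sequence, written out as a runnable sketch (the effect id is hypothetical; the real one comes from task:list):

```shell
# Sketch: stage the value file for task:post; never touch result.json yourself.
EFFECT_ID="effect-abc123"            # hypothetical; use the effectId from task:list
mkdir -p "tasks/$EFFECT_ID"
printf '{"score": 85}\n' > "tasks/$EFFECT_ID/output.json"
# The SDK then creates result.json from this value file:
#   $CLI task:post <runId> "$EFFECT_ID" --status ok --value "tasks/$EFFECT_ID/output.json"
cat "tasks/$EFFECT_ID/output.json"
```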
Repeat the orchestration loop by calling run:iterate or performing the correct next step.
If you don't follow this step, you will be called by the stop-hook and asked to repeat the orchestration loop or exit it by posting the completion secret.
| Kind | Description | Executor |
|---|---|---|
| node | Node.js script | Local node process |
| shell | Shell script | Local shell process |
| agent | LLM agent | Agent runtime |
| skill | Claude Code skill | Skill system |
| breakpoint | Human approval | UI/CLI |
| sleep | Time gate | Scheduler |
Important: Check which subagents and agents are actually available before assigning the name. If none are available, pass the general-purpose subagent. When executing the agent task, use the Task tool. Never use the Babysitter skill or agent to execute the task. If the subagent or agent is not installed for the project before running the process, install it first.
export const agentTask = defineTask('agent-scorer', (args, taskCtx) => ({
kind: 'agent', // ← Use "agent" not "node"
title: 'Agent scoring',
agent: {
name: 'quality-scorer',
prompt: {
role: 'QA engineer',
task: 'Score results 0-100',
context: { ...args },
instructions: ['Review', 'Score', 'Recommend'],
outputFormat: 'JSON'
},
outputSchema: {
type: 'object',
required: ['score']
}
},
io: {
inputJsonPath: `tasks/${taskCtx.effectId}/input.json`,
outputJsonPath: `tasks/${taskCtx.effectId}/result.json`
}
}));
Important: Check which skills are actually available before assigning the name. Never use the Babysitter skill or agent to execute the task. If the skill or subagent is not installed for the project before running the process, install it first.
export const skillTask = defineTask('analyzer-skill', (args, taskCtx) => ({
kind: 'skill', // ← Use "skill" not "node"
title: 'Analyze codebase',
skill: {
name: 'codebase-analyzer',
context: {
scope: args.scope,
depth: args.depth,
analysisType: args.type,
criteria: ['Code consistency', 'Naming conventions', 'Error handling'],
instructions: [
'Scan specified paths for code patterns',
'Analyze consistency across the codebase',
'Check naming conventions',
'Review error handling patterns',
'Generate structured analysis report'
]
}
},
io: {
inputJsonPath: `tasks/${taskCtx.effectId}/input.json`,
outputJsonPath: `tasks/${taskCtx.effectId}/result.json`
}
}));
Create run:
$CLI run:create --process-id <id> --entry <path>#<export> --inputs <path> --run-id <id>
Check status:
$CLI run:status <runId> --json
When the run completes, run:iterate and run:status emit completionSecret. Use that exact value in a <promise>...</promise> tag to end the loop.
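A sketch of pulling the secret out only when the status is actually completed (stand-in payload; field names taken from the iterate output shown earlier):

```shell
# Sketch: extract completionSecret only from a completed run.
STATUS_OUTPUT='{"status":"completed","completionSecret":"s3cr3t-value"}'
if [ "$(printf '%s' "$STATUS_OUTPUT" | jq -r '.status')" = "completed" ]; then
  SECRET=$(printf '%s' "$STATUS_OUTPUT" | jq -r '.completionSecret')
  echo "$SECRET"
fi
```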
View events:
$CLI run:events <runId> --limit 20 --reverse
List tasks:
$CLI task:list <runId> --pending --json
Post task result:
$CLI task:post <runId> <effectId> --status <ok|error> --json
Iterate:
$CLI run:iterate <runId> --json --iteration <n>
If at any point the run fails due to SDK issues, corrupted state, or a corrupted journal, analyze the error and the journal events, recover the state and journal to the last known good state, adapt, and try to continue the run.
When building UX and full-stack applications, integrate/link the main pages of the frontend with the functionality created for every phase of the development process (where relevant), so there is a way to test the functionality of the app as you go.
Unless otherwise specified, prefer quality-gated iterative development loops in the process.
You can change the process after the run is created or during the run (adapting the process and journal accordingly) in case you discover previously unknown information or requirements that change the approach or the process.
The process should be a comprehensive and complete solution to the user request, not a partial solution or a work in progress. It should be a complete, working solution that can be used to test the functionality of the app as you go.
Include verification and refinement steps (and loops) for planning, integration, debugging, refactoring, and other phases as well.
Create the process with the available skills and subagents.
Unless otherwise specified, prefer processes that close the widest loop in the quality gates (for example, e2e tests with a full browser, or an emulator/VM if it is a mobile or desktop app) AND gates that make sure the work is accurate against the user request (all the specs are covered and nothing extra was added unless permitted by the intent of the user).
Scan the methodologies and processes in the plugin and the SDK package to find relevant processes and methodologies to use as references. Also search for process files bundled in active skills and processes in the repo (.a5c/processes/).
If you encounter a generic, reusable part of a process that can later be reused and composed, build it in a modular way, organize it in the .a5c/processes directory, and import it to compose it into the specific process for the current user request. Prefer architecting processes in this modular way for reusability and composition.
Prefer processes that have the following characteristics, unless otherwise specified:
CRITICAL RULE: The completion secret is emitted only when the run is completed. You may ONLY output <promise>SECRET</promise> when the run is completely and unequivocally DONE (completed status from the orchestration CLI). Do not output false promises to escape the run, and do not mention the secret to the user.
CRITICAL RULE: In non-interactive mode, never approve breakpoints through the CLI yourself. Only create them and wait. They will always be resolved externally.
CRITICAL RULE: In non-interactive mode, if a breakpoint is still waiting, monitor it until it is resolved. Do not return a prompt to the user; just wait for the breakpoint to be resolved.
CRITICAL RULE: If a run is broken/failed/in an unknown state, one way to recover is to remove the last bad entries in the journal and rebuild the state. In interactive mode, use the AskUserQuestion tool if you need to ask the user a question about the recovery and you have exhausted all other options.
CRITICAL RULE: When creating processes, search for available skills and subagents before thinking about the exact orchestration. Prefer processes that close the widest loop in the quality gates (for example, e2e tests with a full browser, or an emulator/VM if it is a mobile or desktop app) AND gates that make sure the work is accurate against the user request (all the specs are covered and nothing extra was added unless permitted by the intent of the user).
CRITICAL RULE: Do not use the babysit skill inside delegated tasks. If you are performing a delegated task as a subagent, you will get an error when trying to run the setup shell script. That means you must actually perform the task yourself rather than orchestrate, babysit, or even use this skill.
CRITICAL RULE: Never build wrapper, Python, or helper scripts to orchestrate the runs. Use the CLI to drive the orchestration loop and never loop programmatically.
CRITICAL RULE: Never fall back to simpler execution if the user activated this skill. Persist in executing the orchestration itself rather than simplifying by bypassing the process above: you must create a valid process js file, create a run and associate it with the session, then iterate the run until it is completed.
process/tdd-quality-convergence.js - TDD quality convergence example. Read it and look for relevant processes and methodologies before creating the code process for a new run (create the run using the CLI, then use these processes as references).
reference/ADVANCED_PATTERNS.md - Agent/skill patterns, iterative convergence
packages/sdk/sdk.md - SDK API reference