trigger-agents by triggerdotdev/skills
npx skills add https://github.com/triggerdotdev/skills --skill trigger-agents

Build production-ready AI agents using Trigger.dev's durable execution.
Need to... → Use
─────────────────────────────────────────────────────
Process items in parallel → Parallelization
Route to different models/handlers → Routing
Chain steps with validation gates → Prompt Chaining
Coordinate multiple specialized tasks → Orchestrator-Workers
Self-improve until quality threshold → Evaluator-Optimizer
Pause for human approval → Human-in-the-Loop (waitpoints.md)
Stream progress to frontend → Realtime Streams (streaming.md)
Let LLM call your tasks as tools → ai.tool (ai-tool.md)
Chain LLM calls with validation between steps. Fail early if an intermediate output is bad.
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

export const translateCopy = task({
  id: "translate-copy",
  run: async ({ text, targetLanguage, maxWords }) => {
    // Step 1: Generate
    const draft = await generateText({
      model: openai("gpt-4o"),
      prompt: `Write marketing copy about: ${text}`,
    });

    // Gate: Validate before continuing
    const wordCount = draft.text.split(/\s+/).length;
    if (wordCount > maxWords) {
      throw new Error(`Draft too long: ${wordCount} > ${maxWords}`);
    }

    // Step 2: Translate (only if the gate passed)
    const translated = await generateText({
      model: openai("gpt-4o"),
      prompt: `Translate to ${targetLanguage}: ${draft.text}`,
    });

    return { draft: draft.text, translated: translated.text };
  },
});
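The validation gate above is a pure check, so it can be extracted and unit-tested on its own. A minimal sketch (the `checkWordLimit` helper name is hypothetical, not part of the skill):

```typescript
// Hypothetical helper mirroring the gate in the task above:
// counts whitespace-separated words and throws if the draft is over the limit.
function checkWordLimit(text: string, maxWords: number): number {
  const wordCount = text.trim().split(/\s+/).length;
  if (wordCount > maxWords) {
    throw new Error(`Draft too long: ${wordCount} > ${maxWords}`);
  }
  return wordCount;
}

console.log(checkWordLimit("concise marketing copy", 10)); // → 3
```

Throwing inside `run` is what makes the gate work: Trigger.dev records the run as failed instead of feeding a bad draft into step 2.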
Use a cheap model to classify, then route to the appropriate handler.
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const routingSchema = z.object({
  model: z.enum(["gpt-4o", "o1-mini"]),
  reason: z.string(),
});

export const routeQuestion = task({
  id: "route-question",
  run: async ({ question }) => {
    // Cheap classification call
    const routing = await generateText({
      model: openai("gpt-4o-mini"),
      messages: [
        {
          role: "system",
          content: `Classify question complexity. Return JSON: {"model": "gpt-4o" | "o1-mini", "reason": "..."}
- gpt-4o: simple factual questions
- o1-mini: complex reasoning, math, code`,
        },
        { role: "user", content: question },
      ],
    });
    const { model } = routingSchema.parse(JSON.parse(routing.text));

    // Route to the selected model
    const answer = await generateText({
      model: openai(model),
      prompt: question,
    });

    return { answer: answer.text, routedTo: model };
  },
});
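One practical caveat: `JSON.parse(routing.text)` throws if the classifier wraps its JSON in prose or markdown fences. A defensive sketch (the `extractJson` helper is hypothetical, not part of the skill; the AI SDK's `generateObject` is another option, since it enforces a schema directly):

```typescript
// Hypothetical helper: pull the first {...} span out of an LLM reply before
// parsing, so surrounding prose doesn't break JSON.parse.
// Note: the greedy match assumes at most one JSON object in the reply.
function extractJson(raw: string): unknown {
  const braced = raw.match(/\{[\s\S]*\}/);
  if (!braced) throw new Error("No JSON object found in model output");
  return JSON.parse(braced[0]);
}

const reply = 'Sure! {"model": "o1-mini", "reason": "math"} Hope that helps.';
console.log(extractJson(reply)); // → { model: 'o1-mini', reason: 'math' }
```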
Run independent LLM calls simultaneously with batch.triggerByTaskAndWait.
import { batch, task } from "@trigger.dev/sdk";

export const analyzeContent = task({
  id: "analyze-content",
  run: async ({ text }) => {
    // All three tasks run in parallel
    const { runs: [sentiment, summary, moderation] } = await batch.triggerByTaskAndWait([
      { task: analyzeSentiment, payload: { text } },
      { task: summarizeText, payload: { text } },
      { task: moderateContent, payload: { text } },
    ]);

    // Check moderation first
    if (moderation.ok && moderation.output.flagged) {
      return { error: "Content flagged", reason: moderation.output.reason };
    }

    return {
      sentiment: sentiment.ok ? sentiment.output : null,
      summary: summary.ok ? summary.output : null,
    };
  },
});
See references/orchestration.md for advanced patterns.
The orchestrator extracts work items, fans out to workers, and aggregates the results.
import { batch, task } from "@trigger.dev/sdk";

export const factChecker = task({
  id: "fact-checker",
  run: async ({ article }) => {
    // Step 1: Extract claims (sequential - we need this output first)
    const { runs: [extractResult] } = await batch.triggerByTaskAndWait([
      { task: extractClaims, payload: { article } },
    ]);
    if (!extractResult.ok) throw new Error("Failed to extract claims");
    const claims = extractResult.output;

    // Step 2: Fan out - verify all claims in parallel
    const { runs } = await batch.triggerByTaskAndWait(
      claims.map(claim => ({ task: verifyClaim, payload: claim }))
    );

    // Step 3: Fan in - aggregate the results
    const verified = runs
      .filter((r): r is typeof r & { ok: true } => r.ok)
      .map(r => r.output);

    return { claims, verifications: verified };
  },
});
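The `(r): r is typeof r & { ok: true }` filter in step 3 is a TypeScript type predicate: after filtering, the compiler knows each remaining run succeeded, so `output` is accessible. The same pattern on a plain result union (the `RunResult` type here is illustrative, not the SDK's actual type):

```typescript
// Illustrative discriminated union shaped like a batch run result.
type RunResult<T> =
  | { ok: true; taskIdentifier: string; output: T }
  | { ok: false; taskIdentifier: string; error: string };

// Type predicate: after .filter(isOk), TypeScript narrows to the ok branch.
function isOk<T>(r: RunResult<T>): r is Extract<RunResult<T>, { ok: true }> {
  return r.ok;
}

const runs: RunResult<string>[] = [
  { ok: true, taskIdentifier: "verify-claim", output: "supported" },
  { ok: false, taskIdentifier: "verify-claim", error: "timeout" },
  { ok: true, taskIdentifier: "verify-claim", output: "refuted" },
];

const outputs = runs.filter(isOk).map((r) => r.output);
console.log(outputs); // → ["supported", "refuted"]
```

Failed runs are dropped rather than failing the whole batch, which is usually what you want when aggregating best-effort verifications.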
Generate → Evaluate → Retry with feedback until approved.
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

export const refineTranslation = task({
  id: "refine-translation",
  run: async ({ text, targetLanguage, feedback, attempt = 0 }) => {
    // Bail-out condition
    if (attempt >= 5) {
      return { text, status: "MAX_ATTEMPTS", attempts: attempt };
    }

    // Generate (including feedback if this is a retry)
    const prompt = feedback
      ? `Improve this translation based on feedback:\n${feedback}\n\nOriginal: ${text}`
      : `Translate to ${targetLanguage}: ${text}`;
    const translation = await generateText({
      model: openai("gpt-4o"),
      prompt,
    });

    // Evaluate
    const evaluation = await generateText({
      model: openai("gpt-4o"),
      prompt: `Evaluate translation quality. Reply APPROVED or provide specific feedback:\n${translation.text}`,
    });
    if (evaluation.text.includes("APPROVED")) {
      return { text: translation.text, status: "APPROVED", attempts: attempt + 1 };
    }

    // Recursive self-call with feedback
    return refineTranslation.triggerAndWait({
      text,
      targetLanguage,
      feedback: evaluation.text,
      attempt: attempt + 1,
    }).unwrap();
  },
});
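The recursive self-trigger gives each attempt its own durable run. When per-attempt checkpointing isn't needed, the same control flow is just a loop; a standalone sketch with the two LLM calls injected as functions (`refineUntilApproved` is a hypothetical helper, not part of the skill):

```typescript
// Sketch of the generate → evaluate → retry loop, with the LLM calls
// stubbed out as injected async functions so the flow is testable.
async function refineUntilApproved(
  generate: (feedback?: string) => Promise<string>,
  evaluate: (candidate: string) => Promise<string>, // "APPROVED" or feedback
  maxAttempts = 5
): Promise<{ text: string; status: string; attempts: number }> {
  let feedback: string | undefined;
  let candidate = "";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    candidate = await generate(feedback);
    const verdict = await evaluate(candidate);
    if (verdict.includes("APPROVED")) {
      return { text: candidate, status: "APPROVED", attempts: attempt };
    }
    feedback = verdict; // feed the critique into the next attempt
  }
  return { text: candidate, status: "MAX_ATTEMPTS", attempts: maxAttempts };
}

// Mock evaluator that approves on the second attempt.
let calls = 0;
refineUntilApproved(
  async (fb) => (fb ? "better draft" : "first draft"),
  async () => (++calls >= 2 ? "APPROVED" : "too literal")
).then((r) => console.log(r.status, r.attempts)); // → APPROVED 2
```

The trade-off: the loop is simpler, but a crash mid-loop restarts from attempt 1, whereas the recursive task version resumes from the last completed attempt.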
| Feature | What it enables | Reference |
|---|---|---|
| Waitpoints | Human approval gates, external callbacks | references/waitpoints.md |
| Streams | Real-time progress to the frontend | references/streaming.md |
| ai.tool | Let LLMs call your tasks as tools | references/ai-tool.md |
| batch.triggerByTaskAndWait | Type-safe parallel execution | references/orchestration.md |
const { runs } = await batch.triggerByTaskAndWait([...]);

// Check individual results
for (const run of runs) {
  if (run.ok) {
    console.log(run.output); // Typed output
  } else {
    console.error(run.error); // Error details
    console.log(run.taskIdentifier); // Which task failed
  }
}

// Or filter by task type
const verifications = runs
  .filter((r): r is typeof r & { ok: true } =>
    r.ok && r.taskIdentifier === "verify-claim"
  )
  .map(r => r.output);

// Trigger and wait for the result
const result = await myTask.triggerAndWait(payload);
if (result.ok) console.log(result.output);

// Batch-trigger the same task
const results = await myTask.batchTriggerAndWait([
  { payload: item1 },
  { payload: item2 },
]);

// Batch-trigger different tasks (type-safe)
const { runs } = await batch.triggerByTaskAndWait([
  { task: taskA, payload: { foo: 1 } },
  { task: taskB, payload: { bar: "x" } },
]);

// Self-recursion with unwrap
return myTask.triggerAndWait(newPayload).unwrap();
Weekly Installs: 755
GitHub Stars: 18
First Seen: Jan 28, 2026
Security Audits: Gen Agent Trust Hub Pass, Socket Pass, Snyk Warn
Installed on: codex (697), opencode (691), gemini-cli (686), github-copilot (674), kimi-cli (648), amp (648)