sentry-setup-ai-monitoring by getsentry/sentry-agent-skills
npx skills add https://github.com/getsentry/sentry-agent-skills --skill sentry-setup-ai-monitoring

Configure Sentry to track LLM calls, agent executions, tool usage, and token consumption.
Important: The SDK versions, API names, and code samples below are for reference only. Always verify against docs.sentry.io before implementing, as APIs and minimum versions may have changed.
AI monitoring requires tracing to be enabled (tracesSampleRate > 0).
Prompt and output recording captures user content that may contain personally identifiable information (PII). Before enabling recordInputs/recordOutputs (JS) or include_prompts/send_default_pii (Python):
Ask the user whether they want prompt/output capture enabled. Do not enable it by default; configure it only when explicitly requested or confirmed. Use tracesSampleRate: 1.0 only in development; in production, use a lower value or a tracesSampler function.
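The production sampling advice above can be sketched as a tracesSampler that keeps AI traces while downsampling everything else. This is a minimal sketch: the samplingContext fields (name, attributes) follow the Sentry JS SDK's documented shape, but verify the exact fields for your SDK version against docs.sentry.io.

```javascript
// Sketch of a tracesSampler: keep every AI-related trace, sample the rest at 10%.
function tracesSampler(samplingContext) {
  const { name, attributes } = samplingContext;
  // Keep transactions that wrap gen_ai spans or carry a gen_ai model attribute.
  if (name?.startsWith("gen_ai") || attributes?.["gen_ai.request.model"]) {
    return 1.0;
  }
  // Sample all other traffic at 10% in production.
  return 0.1;
}

// Passed instead of tracesSampleRate: Sentry.init({ dsn: "YOUR_DSN", tracesSampler });
```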
Always detect installed AI SDKs before configuring:
# JavaScript
grep -E '"(openai|@anthropic-ai/sdk|ai|@langchain|@google/genai)"' package.json
# Python
grep -E '(openai|anthropic|langchain|huggingface)' requirements.txt pyproject.toml 2>/dev/null
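The same detection can be done programmatically. A sketch that checks a parsed package.json against the supported JS packages listed below (the package list mirrors the table; treat it as an assumption to keep in sync with docs.sentry.io):

```javascript
// Sketch: detect which supported AI SDKs a parsed package.json declares.
const SUPPORTED = ["openai", "@anthropic-ai/sdk", "ai", "@google/genai"];

function detectAiSdks(pkg) {
  // Merge runtime and dev dependencies; either may be absent.
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return Object.keys(deps).filter(
    (name) => SUPPORTED.includes(name) || name.startsWith("@langchain/"),
  );
}
```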
| Package | Integration | Min Sentry SDK | Auto? |
|---|---|---|---|
| openai | openAIIntegration() | 10.28.0 | Yes |
| @anthropic-ai/sdk | anthropicAIIntegration() | 10.28.0 | Yes |
| ai (Vercel) | vercelAIIntegration() | 10.6.0 | Yes* |
| @langchain/* | langChainIntegration() | 10.28.0 | Yes |
| @langchain/langgraph | langGraphIntegration() | 10.28.0 | Yes |
| @google/genai | googleGenAIIntegration() | 10.28.0 | Yes |

*Vercel AI: 10.6.0+ for Node.js, Cloudflare Workers, Vercel Edge Functions, and Bun; 10.12.0+ for Deno. Requires experimental_telemetry on each call.
Integrations auto-enable when the AI package is installed; no explicit registration is needed:
| Package | Auto? | Notes |
|---|---|---|
| openai | Yes | Includes the OpenAI Agents SDK |
| anthropic | Yes | |
| langchain / langgraph | Yes | |
| huggingface_hub | Yes | |
| google-genai | Yes | |
| pydantic-ai | Yes | |
| litellm | No | Requires explicit integration |
| mcp (Model Context Protocol) | Yes | |
Just make sure tracing is enabled. Integrations auto-enable when the AI package is installed:
Sentry.init({
dsn: "YOUR_DSN",
tracesSampleRate: 1.0, // Lower this in production (e.g., 0.1)
// OpenAI, Anthropic, Google GenAI, and LangChain integrations auto-enable in Node.js
});
To customize (e.g., to enable prompt capture; see the data-capture warning above):
integrations: [
Sentry.openAIIntegration({
// recordInputs: true, // Opt-in: captures prompt content (PII)
// recordOutputs: true, // Opt-in: captures response content (PII)
}),
],
In browser-side code or Next.js meta-framework apps, auto-instrumentation is not available. Wrap the client manually:
import OpenAI from "openai";
import * as Sentry from "@sentry/nextjs"; // 或 @sentry/react, @sentry/browser
const openai = Sentry.instrumentOpenAiClient(new OpenAI());
// Use the 'openai' client as normal
integrations: [
Sentry.langChainIntegration({
// recordInputs: true, // Opt-in: captures prompt content (PII)
// recordOutputs: true, // Opt-in: captures response content (PII)
}),
Sentry.langGraphIntegration({
// recordInputs: true,
// recordOutputs: true,
}),
],
For the Edge runtime, add to sentry.edge.config.ts:
integrations: [Sentry.vercelAIIntegration()],
Enable telemetry on each call:
await generateText({
model: openai("gpt-4o"),
prompt: "Hello",
experimental_telemetry: {
isEnabled: true,
// recordInputs: true, // Opt-in: captures prompt content (PII)
// recordOutputs: true, // Opt-in: captures response content (PII)
},
});
Integrations auto-enable; just initialize with tracing enabled. Add explicit imports only to customize options:
import sentry_sdk
sentry_sdk.init(
dsn="YOUR_DSN",
traces_sample_rate=1.0,  # Lower this in production (e.g., 0.1)
# send_default_pii=True,  # Opt-in: required for prompt capture (sends user PII)
# Integrations auto-enable when the AI package is installed.
# Only specify them explicitly to customize (e.g., include_prompts):
# integrations=[OpenAIIntegration(include_prompts=True)],
)
Use this when no supported SDK is detected.
| op Value | Purpose |
|---|---|
| gen_ai.request | Individual LLM calls |
| gen_ai.invoke_agent | Agent execution lifecycle |
| gen_ai.execute_tool | Tool/function calls |
| gen_ai.handoff | Agent-to-agent handoffs |
await Sentry.startSpan({
op: "gen_ai.request",
name: "LLM request gpt-4o",
attributes: { "gen_ai.request.model": "gpt-4o" },
}, async (span) => {
span.setAttribute("gen_ai.request.messages", JSON.stringify(messages));
const result = await llmClient.complete(prompt);
span.setAttribute("gen_ai.usage.input_tokens", result.inputTokens);
span.setAttribute("gen_ai.usage.output_tokens", result.outputTokens);
return result;
});
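A gen_ai.execute_tool span can be wrapped the same way. In this sketch the startSpan function is injected so the helper stays testable: pass Sentry.startSpan in real code. The gen_ai.tool.input and gen_ai.tool.output attribute names are illustrative assumptions; only gen_ai.tool.name appears in the attribute table below, so verify the rest against docs.sentry.io.

```javascript
// Hypothetical helper wrapping a tool invocation in a gen_ai.execute_tool span.
// `startSpan` is injected: pass Sentry.startSpan in real code, or a stub in tests.
async function traceToolCall(startSpan, toolName, args, toolFn) {
  return startSpan(
    {
      op: "gen_ai.execute_tool",
      name: `execute_tool ${toolName}`,
      attributes: { "gen_ai.tool.name": toolName },
    },
    async (span) => {
      // Illustrative attribute names; check current conventions on docs.sentry.io.
      span.setAttribute("gen_ai.tool.input", JSON.stringify(args));
      const result = await toolFn(args);
      span.setAttribute("gen_ai.tool.output", JSON.stringify(result));
      return result;
    },
  );
}
```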
| Attribute | Description |
|---|---|
| gen_ai.request.model | Model identifier |
| gen_ai.request.messages | JSON-encoded input messages |
| gen_ai.usage.input_tokens | Input token count |
| gen_ai.usage.output_tokens | Output token count |
| gen_ai.agent.name | Agent identifier |
| gen_ai.tool.name | Tool identifier |
Enable prompt/output capture only after confirming with the user (see the data-capture warning above).
After configuring, make an LLM call and check the Sentry Traces dashboard. AI spans appear with gen_ai.* operations, showing model, token counts, and latency.
| Issue | Solution |
|---|---|
| AI spans not appearing | Verify tracesSampleRate > 0; check the SDK version |
| Token counts missing | Some providers don't return token counts for streaming calls |
| Prompts not captured | Enable recordInputs/include_prompts |
| Vercel AI not working | Add experimental_telemetry to each call |
Weekly Installs: 404
GitHub Stars: 19
First Seen: Jan 20, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on: opencode (369), codex (365), gemini-cli (358), github-copilot (345), cursor (326), claude-code (305)