spanora-setup by spanora/skills
npx skills add https://github.com/spanora/skills --skill spanora-setup
You are integrating Spanora AI observability into the user's project. Follow this guide step by step.
Activate this skill when the user asks to set up or integrate Spanora observability.
The official Spanora documentation at https://spanora.ai/docs is always up to date and is the canonical source of truth. The bundled references/ files in this skill are the primary step-by-step guide, but if you encounter ambiguity, an unfamiliar API, edge cases, or something that doesn't match what you see in the user's code — fetch the relevant doc page using WebFetch. If the public docs contradict a bundled reference, the public docs win.
Key pages by integration pattern:
| Pattern | Doc page |
|---|---|
| Vercel AI SDK | https://spanora.ai/docs/integrations/vercel-ai |
| OpenAI SDK | https://spanora.ai/docs/integrations/openai |
| Anthropic SDK | https://spanora.ai/docs/integrations/anthropic |
| LangChain Python | https://spanora.ai/docs/integrations/langchain |
| Raw OTEL / other | https://spanora.ai/docs/integrations/raw-otel |
| TypeScript SDK reference | https://spanora.ai/docs/sdk |
| OTEL attribute conventions | https://spanora.ai/docs/sdk/attributes |
You do not need to fetch docs on every run — only when something is unclear or you suspect the bundled references may be stale.
The user must have a Spanora API key (starts with ak_). Never ask the user to paste their API key into the conversation.
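The presence-only check described below can be sketched as follows. This is a minimal illustration, not part of the Spanora skill itself; the dotenv file names mirror the step that follows.

```python
# Sketch: verify SPANORA_API_KEY is configured without ever reading,
# printing, or logging its value. Checks the shell environment first,
# then common dotenv files in the project root.
import os
import re
from pathlib import Path


def spanora_key_present(project_root: str = ".") -> bool:
    """Return True if SPANORA_API_KEY appears to be set; never expose it."""
    if os.environ.get("SPANORA_API_KEY"):
        return True
    for name in (".env", ".env.local"):
        path = Path(project_root) / name
        if path.is_file() and re.search(
            r"^SPANORA_API_KEY=\S+", path.read_text(), flags=re.MULTILINE
        ):
            return True
    return False
```

Note the function only ever returns a boolean, so it cannot leak the key into agent output or logs.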
Check whether SPANORA_API_KEY is already set in .env (or .env.local) or as a shell environment variable. Only check for presence; do not output or log the value. If it is missing, ask the user to add it to their .env file as SPANORA_API_KEY=ak_... and point them to https://spanora.ai/settings to find their key. If .env is not in .gitignore, remind the user to add it.
Determine the project language by checking for config files in the project root:
| File found | Language |
|---|---|
| package.json | JavaScript / TypeScript |
| pyproject.toml | Python |
| setup.py | Python |
| requirements.txt | Python |
If both JS and Python files are present, ask the user which part of the project to instrument.
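The detection table above can be sketched as a small helper. This is an illustrative sketch, not code from the Spanora skill:

```python
# Sketch: infer the project language(s) from config files in the project
# root, mirroring the table above. A mixed project returns both, in which
# case the user should be asked which part to instrument.
from pathlib import Path

PY_MARKERS = ("pyproject.toml", "setup.py", "requirements.txt")


def detect_languages(root: str = ".") -> set[str]:
    langs: set[str] = set()
    if (Path(root) / "package.json").is_file():
        langs.add("javascript")
    if any((Path(root) / m).is_file() for m in PY_MARKERS):
        langs.add("python")
    return langs  # empty set -> ask the user; both -> ask which to instrument
```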
Read package.json and check dependencies and devDependencies:
| Dependency found | Pattern to use |
|---|---|
| ai | Pattern A — Vercel AI SDK |
| @anthropic-ai/sdk | Pattern B — Anthropic SDK |
| openai | Pattern C — OpenAI SDK |
| None of the above | Pattern D — Raw Core SDK |
If multiple are present, prefer in order: A > B > C. Use the pattern matching the SDK the user's code actually calls. If unsure, ask.
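The dependency-to-pattern mapping with its A > B > C priority can be sketched like this (illustrative only; pattern letters are the ones defined in the table above):

```python
# Sketch: choose a JS integration pattern from package.json dependencies.
# Checks both "dependencies" and "devDependencies"; the list order encodes
# the A > B > C priority, and "D" (raw core SDK) is the fallback.
import json
from pathlib import Path

PRIORITY = [("ai", "A"), ("@anthropic-ai/sdk", "B"), ("openai", "C")]


def pick_js_pattern(package_json_path: str) -> str:
    pkg = json.loads(Path(package_json_path).read_text())
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    for dep, pattern in PRIORITY:
        if dep in deps:
            return pattern
    return "D"
```

Remember this is only a heuristic: the guide still says to confirm which SDK the code actually calls, and to ask the user when unsure.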
Read pyproject.toml (or requirements.txt / setup.py) and check dependencies:
| Dependency found | Pattern to use |
|---|---|
| langchain | Pattern E — LangChain / LangGraph |
More Python patterns may be added in the future. If the user's Python project does not use LangChain, inform them that Spanora supports any Python framework via raw OpenTelemetry — refer them to the LangChain reference as a template for OTEL setup.
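The Python-side check can be sketched as below. Note the substring match is deliberately naive (an assumption for brevity); a careful check would parse the dependency tables rather than scan raw text:

```python
# Sketch: decide between Pattern E (LangChain) and the raw-OTEL fallback
# by scanning whichever Python dependency file the project uses.
from pathlib import Path


def uses_langchain(root: str = ".") -> bool:
    for name in ("pyproject.toml", "requirements.txt", "setup.py"):
        path = Path(root) / name
        # Naive substring check for illustration; real code should parse
        # the file and inspect declared dependencies.
        if path.is_file() and "langchain" in path.read_text():
            return True
    return False
```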
Detect the JavaScript package manager from the lockfile:
| File found | Package manager |
|---|---|
| pnpm-lock.yaml | pnpm |
| yarn.lock | yarn |
| bun.lockb | bun |
| package-lock.json | npm |
Detect the Python package manager:
| File found | Package manager |
|---|---|
| uv.lock | uv |
| poetry.lock | poetry |
| Pipfile.lock | pipenv |
| Otherwise | pip |
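Both lockfile tables can be combined into one sketch. The npm default for lockfile-less JS projects is an assumption of this sketch (the table above only specifies a pip fallback for Python):

```python
# Sketch: pick the install tool from the lockfile, combining both tables.
from pathlib import Path

JS_LOCKS = {"pnpm-lock.yaml": "pnpm", "yarn.lock": "yarn",
            "bun.lockb": "bun", "package-lock.json": "npm"}
PY_LOCKS = {"uv.lock": "uv", "poetry.lock": "poetry", "Pipfile.lock": "pipenv"}


def detect_package_manager(root: str, language: str) -> str:
    locks = JS_LOCKS if language == "javascript" else PY_LOCKS
    for lockfile, manager in locks.items():
        if (Path(root) / lockfile).is_file():
            return manager
    # No lockfile found: assume npm for JS (an assumption), pip for Python.
    return "npm" if language == "javascript" else "pip"
```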
pnpm add @spanora-ai/sdk
# or: npm install @spanora-ai/sdk / yarn add @spanora-ai/sdk / bun add @spanora-ai/sdk
pip install opentelemetry-sdk opentelemetry-exporter-otlp opentelemetry-instrumentation-langchain langgraph
# or: uv add ... / poetry add ... / pipenv install ...
No Spanora SDK is needed for Python — tracing uses standard OpenTelemetry.
Based on the detected pattern, read the corresponding reference file for code examples and API usage:
JavaScript / TypeScript: references/vercel-ai.md, references/anthropic.md, references/openai.md, references/core-sdk.md
Python: references/langchain-python.md
For JS/TS patterns, always also read references/common.md for shared patterns: init(), shutdown(), tool tracking (trackToolHandler, runTool), multi-agent shared context, agent naming guidance, API key setup, and the migration checklist. Python patterns are self-contained in their reference file.
Apply the patterns from the reference files to the user's code. The reference files contain production-ready examples verified against the SDK source and integration tests.
Every AI execution must produce at least one trace. For each LLM call site in the user's code, use the highest-fidelity approach available:
1. Auto-telemetry: experimental_telemetry for the Vercel AI SDK, auto-instrumentation for LangChain. Preferred when available; zero manual work.
2. SDK wrappers: trackOpenAI, trackAnthropic, trackVercelAI / trackVercelAIStream. Use when auto-telemetry is unavailable for a call site (e.g. tool-loop agents, custom agent patterns).
3. Manual tracking: trackLlm, trackLlmStream, recordLlm. Fallback for any LLM call not covered by the above.
After applying the base integration, scan the user's code for any LLM call that would not produce a span. If found, wrap it with the appropriate tracking function from the list above. Do not leave blind spots.
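The blind-spot scan can be sketched as a coarse grep over the source tree. The call-site patterns below cover only the common OpenAI, Anthropic, and Vercel AI SDK entry points and are illustrative, not exhaustive:

```python
# Sketch: a coarse scan for LLM call sites that may lack a span.
# Extend LLM_CALL_PATTERNS per project; this is a review aid, not proof
# of missing instrumentation.
import re
from pathlib import Path

LLM_CALL_PATTERNS = [
    r"\.chat\.completions\.create\(",    # OpenAI SDK
    r"\.messages\.create\(",             # Anthropic SDK
    r"\bgenerateText\(|\bstreamText\(",  # Vercel AI SDK
]


def find_llm_call_sites(root: str) -> list[tuple[str, int]]:
    """Return (file, line_number) pairs to review for missing spans."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".ts", ".tsx", ".js", ".py"}:
            continue
        for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(re.search(p, line) for p in LLM_CALL_PATTERNS):
                hits.append((str(path), i))
    return hits
```

Each hit should then be checked by hand: a call already wrapped by auto-telemetry or a track* helper needs no change.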
After applying the base integration, mention these optional features to the user. Do not add them by default — only include them if the user's code has the relevant context available or the user asks for them:
- userId, orgId, agentSessionId on track() calls. Links traces to end users, tenants, and sessions in the dashboard. Only add if the code has access to these values (e.g. from a request context, auth session, or API input).
- operation on LLM meta (trackLlm, trackOpenAI, trackAnthropic, recordLlm). Defaults to "chat". Set to "embeddings" for embedding calls or "text_completion" for completion calls. Only relevant when the user's code makes non-chat LLM calls.
Field name reference:
track() uses agent (not agentName) for the agent name, prompt (not promptInput) for the input prompt, and output (not promptOutput) for the output text.
Each reference file has an "Optional Enrichments" section with code examples for these features.
Weekly Installs: 82
Repository: github.com/spanora/skills
GitHub Stars: 1
First Seen: Feb 17, 2026
Security Audits: Gen Agent Trust Hub: Warn; Socket: Pass; Snyk: Fail
Installed on: mcpjam, mistral-vibe, kilo, claude-code, junie, windsurf