openserv-agent-sdk by openserv-labs/skills
npx skills add https://github.com/openserv-labs/skills --skill openserv-agent-sdk
Build and deploy custom AI agents for the OpenServ platform using TypeScript.
An OpenServ agent is a service that runs your code and exposes it on the OpenServ platform—so it can be triggered by workflows, other agents, or paid calls (e.g. x402). The platform sends tasks to your agent; your agent runs your capabilities (APIs, tools, file handling) and returns results. You don't have to use an LLM—e.g. it could be a static API that just returns data. If you need LLM reasoning, you have two options: use runless capabilities (the platform handles the AI call for you—no API key needed) or use generate() (delegates the LLM call to the platform); alternatively, bring your own LLM (any provider you have access to).
Capabilities come in two kinds: runnable (with a run handler) and runless (just a name and description — the platform handles the AI call automatically). You can also use generate() inside runnable capabilities to delegate LLM calls to the platform.
No OpenServ account yet? provision() creates one for you automatically by creating a wallet and signing up with it (that account is reused on later runs). Call provision() (from @openserv-labs/client): it creates or reuses a wallet, registers the agent, and writes the API key and auth token into your env (or pass agent.instance to bind them directly). In development you can skip setting an endpoint URL; the SDK can use a built-in tunnel to the platform.
Start the agent with run(agent). It listens for tasks, runs your capabilities (and your LLM if you use one), and responds. Use reference.md and troubleshooting.md for details; examples/ has full runnable code.
What you get:
- Runless capabilities — no run() function needed. Optionally define inputSchema and outputSchema for structured I/O.
- Runnable capabilities — name, description, inputSchema, and a run() function.
- generate() method — delegate LLM calls to the platform from inside any runnable capability. No API key needed; the platform performs the call and records usage. Supports text and structured output.
- Task helpers like addLogToTask() and uploadFile().
Reference: reference.md (patterns) · troubleshooting.md (common issues) · examples/ (full examples)
npm install @openserv-labs/sdk @openserv-labs/client zod
Note: openai is only needed if you use the process() method for direct OpenAI calls. Most agents don't need it — use runless capabilities or generate() instead.
See examples/basic-agent.ts for a complete runnable example.
The pattern is simple:
1. Create an Agent with a system prompt
2. Add capabilities with agent.addCapability()
3. Call provision() to register on the platform (pass agent.instance to bind credentials)
4. Start with run(agent)
my-agent/
├── src/agent.ts
├── .env
├── .gitignore
├── package.json
└── tsconfig.json
npm init -y && npm pkg set type=module
npm i @openserv-labs/sdk @openserv-labs/client dotenv zod
npm i -D @types/node tsx typescript
Note: The project must use "type": "module" in package.json. Add a "dev": "tsx src/agent.ts" script for local development. Only install openai if you use the process() method for direct OpenAI calls.
Most agents don't need any LLM API key—use runless capabilities or generate() and the platform handles LLM calls for you. If you use process() for direct OpenAI calls, set OPENAI_API_KEY. The rest is filled by provision().
# Only needed if you use process() for direct OpenAI calls:
# OPENAI_API_KEY=your-openai-key
# ANTHROPIC_API_KEY=your_anthropic_key # If using Claude directly
# Required for deploy (get from OpenServ platform dashboard)
OPENSERV_USER_API_KEY=your-user-api-key
# Auto-populated by provision():
WALLET_PRIVATE_KEY=
OPENSERV_API_KEY=
OPENSERV_AUTH_TOKEN=
PORT=7378
# Production: skip tunnel and run HTTP server only
# DISABLE_TUNNEL=true
# Force tunnel even when endpointUrl is set
# FORCE_TUNNEL=true
Capabilities come in two flavors:
Runless capabilities don't need a run function—the platform handles the AI call automatically. Just provide a name and description:
// Simplest form — just name + description
agent.addCapability({
name: 'generate_haiku',
description: 'Generate a haiku poem (5-7-5 syllables) about the given input.'
})
// With custom input schema
agent.addCapability({
name: 'translate',
description: 'Translate text to the target language.',
inputSchema: z.object({
text: z.string(),
targetLanguage: z.string()
})
})
// With structured output
agent.addCapability({
name: 'analyze_sentiment',
description: 'Analyze the sentiment of the given text.',
outputSchema: z.object({
sentiment: z.enum(['positive', 'negative', 'neutral']),
confidence: z.number().min(0).max(1)
})
})
Key points for runless capabilities:
- No run function — the platform performs the LLM call
- inputSchema is optional — defaults to z.object({ input: z.string() }) if omitted
- outputSchema is optional — define it for structured output from the platform
See examples/haiku-poet-agent.ts for a complete runless example.
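To make the default concrete: when inputSchema is omitted, the documented default is z.object({ input: z.string() }), so the capability receives a single string field. Below is a dependency-free sketch of the equivalent shape check — a hand-rolled stand-in for illustration, not the SDK's or zod's actual implementation:

```typescript
// Stand-in for the documented default schema z.object({ input: z.string() }).
// Illustrative type guard only - the real SDK validates with zod.
interface DefaultInput {
  input: string
}

function isDefaultInput(value: unknown): value is DefaultInput {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as Record<string, unknown>).input === 'string'
  )
}

// A runless capability with no inputSchema receives payloads of this shape:
isDefaultInput({ input: 'Write a haiku about autumn' }) // true
isDefaultInput({})                                      // false - missing input field
```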
Runnable capabilities have a run function for custom logic. Each requires:
name - Unique identifier
description - What it does (helps AI decide when to use it)
inputSchema - Zod schema defining parameters
run - Function returning a string
agent.addCapability({
name: 'greet',
description: 'Greet a user by name',
inputSchema: z.object({ name: z.string() }),
async run({ args }) {
    return `Hello, ${args.name}!`
}
})
See examples/capability-example.ts for basic capabilities.
Note: The schema property still works as an alias for inputSchema but is deprecated. Use inputSchema for new code.
Access this in capabilities to use agent methods like addLogToTask(), uploadFile(), generate(), etc.
See examples/capability-with-agent-methods.ts for logging and file upload patterns.
generate() — Platform-Delegated LLM Calls
The generate() method lets you make LLM calls without any API key. The platform performs the call and records usage to the workspace.
// Text generation
const poem = await this.generate({
prompt: `Write a short poem about ${args.topic}`,
action
})
// Structured output (returns validated object matching the schema)
const metadata = await this.generate({
prompt: `Suggest a title and 3 tags for: ${poem}`,
outputSchema: z.object({
title: z.string(),
tags: z.array(z.string()).length(3)
}),
action
})
// With conversation history
const followUp = await this.generate({
prompt: 'Suggest a related topic.',
messages, // conversation history from run function
action
})
Parameters:
- prompt (string) — The prompt for the LLM
- action (ActionSchema) — The action context (passed into your run function)
- outputSchema (Zod schema, optional) — When provided, returns a validated structured output
- messages (array, optional) — Conversation history for multi-turn generation
The action parameter is required because it identifies the workspace/task for billing. Use it inside runnable capabilities, where action is available from the run function arguments.
await agent.createTask({ workspaceId, assignee, description, body, input, dependencies })
await agent.updateTaskStatus({ workspaceId, taskId, status: 'in-progress' })
await agent.addLogToTask({ workspaceId, taskId, severity: 'info', type: 'text', body: '...' })
await agent.markTaskAsErrored({ workspaceId, taskId, error: 'Something went wrong' })
const task = await agent.getTaskDetail({ workspaceId, taskId })
const tasks = await agent.getTasks({ workspaceId })
const files = await agent.getFiles({ workspaceId })
await agent.uploadFile({ workspaceId, path: 'output.txt', file: 'content', taskIds: [taskId] })
await agent.deleteFile({ workspaceId, fileId })
The action parameter in capabilities is a union type — task only exists on the 'do-task' variant. Always narrow with a type guard before accessing action.task:
async run({ args, action }) {
// action.task does NOT exist on all action types — you must narrow first
if (action?.type === 'do-task' && action.task) {
const { workspace, task } = action
workspace.id // Workspace ID
workspace.goal // Workspace goal
task.id // Task ID
task.description // Task description
task.input // Task input
action.me.id // Current agent ID
}
}
Do not extract action?.task?.id before the type guard — TypeScript will error with Property 'task' does not exist on type 'ActionSchema'.
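The narrowing requirement can be demonstrated with a simplified stand-in for the union. The variant names and fields below are illustrative (the real ActionSchema has more variants and fields); only the 'do-task' shape follows the snippet above:

```typescript
// Simplified stand-in for the SDK's ActionSchema union - illustrative only.
type ActionSketch =
  | { type: 'do-task'; task: { id: number; description: string }; workspace: { id: number } }
  | { type: 'other'; workspace: { id: number } } // hypothetical second variant with no task

function taskIdOrNull(action: ActionSketch | undefined): number | null {
  // Without this guard, accessing action.task would not compile:
  // the 'other' variant has no task property.
  if (action?.type === 'do-task' && action.task) {
    return action.task.id
  }
  return null
}
```

After the type === 'do-task' check, TypeScript narrows the union, so task becomes safely accessible — exactly the pattern the snippet above uses.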
The workflow object in provision() requires two important properties:
name (string) - This becomes the agent name in ERC-8004. Make it polished, punchy, and memorable — this is the public-facing brand name users see. Think product launch, not variable name. Examples: 'Crypto Alpha Scanner', 'AI Video Studio', 'Instant Blog Machine'.
goal (string, required) - A detailed description of what the workflow accomplishes. Must be descriptive and thorough — short or vague goals will cause API calls to fail. Write at least a full sentence explaining the workflow's purpose.
workflow: {
  name: 'Haiku Poetry Generator', // Polished display name — the ERC-8004 agent name users see
  goal: 'Transform any theme or emotion into a beautiful traditional 5-7-5 haiku poem using AI',
  trigger: triggers.x402({ ... }),
  task: { description: 'Generate a haiku about the given topic' }
}
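Since short or vague goals cause provisioning to fail, a pre-flight check before calling provision() can catch them early. The helper and its thresholds below are assumptions for illustration, not documented platform limits:

```typescript
// Hypothetical pre-flight check - the 40-character / 8-word thresholds are
// assumptions, not platform rules. The doc only says goals must be thorough.
function isLikelyDetailedGoal(goal: string): boolean {
  const trimmed = goal.trim()
  const words = trimmed.split(/\s+/).length
  return trimmed.length >= 40 && words >= 8 // roughly "at least a full sentence"
}

isLikelyDetailedGoal('Summarize stuff') // false - too vague
isLikelyDetailedGoal(
  'Transform any theme or emotion into a beautiful traditional 5-7-5 haiku poem using AI'
) // true
```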
import { triggers } from '@openserv-labs/client'
triggers.webhook({ waitForCompletion: true, timeout: 600 })
triggers.x402({ name: '...', description: '...', price: '0.01', timeout: 600 })
triggers.cron({ schedule: '0 9 * * *' })
triggers.manual()
Important: Always set timeout to at least 600 seconds (10 minutes) for webhook and x402 triggers. Agents often take significant time to process requests — especially when performing research, content generation, or other complex tasks. A low timeout will cause premature failures. For multi-agent pipelines with many sequential steps, consider 900 seconds or more.
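The guidance above (600-second floor, 900 or more for multi-step pipelines) can be turned into a small helper. The per-step budget here is an assumption for illustration, not a platform figure:

```typescript
// Hypothetical helper - the 150 s per extra sequential step is an assumed
// budget. The 600 s floor follows the guidance above.
function triggerTimeout(sequentialSteps: number): number {
  const floor = 600
  const perExtraStep = 150
  return Math.max(floor, floor + (sequentialSteps - 1) * perExtraStep)
}

triggerTimeout(1) // 600 - single-agent request
triggerTimeout(3) // 900 - three-step pipeline, matching the "900 or more" advice
```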
provision() creates two types of credentials. They are not interchangeable:
- OPENSERV_API_KEY (Agent API key) — used internally by the SDK to authenticate when receiving tasks. Set automatically by provision() when you pass agent.instance. Do not use this key with PlatformClient.
- WALLET_PRIVATE_KEY / OPENSERV_USER_API_KEY (User credentials) — used with PlatformClient to make management calls (list tasks, debug workflows, etc.). Authenticate with client.authenticate(walletKey) or pass apiKey to the constructor.
If you need to debug tasks or inspect workflows, use wallet authentication:
const client = new PlatformClient()
await client.authenticate(process.env.WALLET_PRIVATE_KEY)
const tasks = await client.tasks.list({ workflowId: result.workflowId })
See troubleshooting.md for details on 401 errors.
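The two credential types can be kept straight with a small guard before constructing PlatformClient. The helper is hypothetical; the rule it enforces (never use the agent API key with PlatformClient) is the one stated above:

```typescript
// Hypothetical guard - enforces the documented rule that OPENSERV_API_KEY
// (the agent key) must not be used for management calls.
type Env = Record<string, string | undefined>

function pickManagementCredential(env: Env): { kind: 'wallet' | 'userApiKey'; value: string } {
  if (env.WALLET_PRIVATE_KEY) return { kind: 'wallet', value: env.WALLET_PRIVATE_KEY }
  if (env.OPENSERV_USER_API_KEY) return { kind: 'userApiKey', value: env.OPENSERV_USER_API_KEY }
  throw new Error(
    'No management credential found. OPENSERV_API_KEY is the agent key and cannot be used here.'
  )
}
```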
npm run dev
The run() function automatically starts your local HTTP server and opens a tunnel to agents-proxy.openserv.ai. No need for ngrok or other tunneling tools — run() handles this seamlessly. Just call run(agent) and your local agent is accessible to the platform.
Deploy your agent to the OpenServ managed cloud with a single command:
npx @openserv-labs/client deploy [path]
Where [path] is the directory containing your agent code (defaults to current directory).
OPENSERV_USER_API_KEY in .env — Your .env file in the agent directory must contain OPENSERV_USER_API_KEY. Get this from the OpenServ platform dashboard. This key is required by the deploy command (and by PlatformClient for management operations). Note that provision() itself does not need this key — it creates its own wallet, authenticates, and persists credentials to .openserv.json independently. The user API key is also saved to .openserv.json after provision if present.
Call provision() first — provision() must run at least once before deploying. It registers the agent on the platform and persists credentials to .openserv.json. The recommended agent template already calls provision() before run(agent) in main(), so starting the agent locally (npm run dev or npx tsx src/agent.ts) is enough. If your code does not call provision() (e.g., you only call run(agent) in a custom script), you must add an explicit provision() call and run it once before deploying.
1. Set OPENSERV_USER_API_KEY in .env
2. Call provision() during local startup (npm run dev) — registers the agent and writes .openserv.json
3. npx @openserv-labs/client deploy .
When deploying to a hosting provider like Cloud Run, set DISABLE_TUNNEL=true as an environment variable. This makes run() start only the HTTP server without opening a WebSocket tunnel — the platform reaches your agent directly at its public URL.
await provision({
agent: {
name: 'my-agent',
description: '...',
endpointUrl: 'https://my-agent.example.com' // Required for production
},
workflow: {
name: 'Lightning Service Pro',
goal: 'Describe in detail what this workflow does — be thorough, vague goals cause failures',
trigger: triggers.webhook({ waitForCompletion: true, timeout: 600 }),
task: { description: 'Process incoming requests' }
}
})
// With DISABLE_TUNNEL=true, run() starts only the HTTP server (no tunnel)
await run(agent)
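The tunnel behavior described above reduces to a small decision. Here is a sketch of that logic (illustrative, not the SDK source; the precedence between the two flags when both are set is an assumption):

```typescript
// Sketch of the documented tunnel behavior - not the SDK's implementation.
// FORCE_TUNNEL forces a tunnel even when endpointUrl is set;
// DISABLE_TUNNEL makes run() start the HTTP server only.
function shouldOpenTunnel(env: Record<string, string | undefined>): boolean {
  if (env.FORCE_TUNNEL === 'true') return true    // force tunnel regardless of endpointUrl
  if (env.DISABLE_TUNNEL === 'true') return false // production: HTTP server only
  return true                                     // local development default
}

shouldOpenTunnel({})                         // true - local dev uses the built-in tunnel
shouldOpenTunnel({ DISABLE_TUNNEL: 'true' }) // false - e.g. a Cloud Run deployment
```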
After provisioning, register your agent on-chain for discoverability via the Identity Registry.
Requires ETH on Base. Registration calls register() on the ERC-8004 contract on Base mainnet (chain 8453), which costs gas. The wallet created by provision() starts with a zero balance. Fund it with a small amount of ETH on Base before the first registration attempt. The wallet address is logged during provisioning (Created new wallet: 0x...).
Always wrap in try/catch so a registration failure (e.g. an unfunded wallet) doesn't prevent run(agent) from starting.
Two important patterns:
1. Load dotenv programmatically (not import 'dotenv/config') so you can reload .env after provision() writes WALLET_PRIVATE_KEY.
2. Call dotenv.config({ override: true }) after provision() to pick up the freshly written key before ERC-8004 registration.
import dotenv from 'dotenv'
dotenv.config()
import { Agent, run } from '@openserv-labs/sdk'
import { provision, triggers, PlatformClient } from '@openserv-labs/client'
// ... define agent and capabilities ...
const result = await provision({
agent: { instance: agent, name: 'my-agent', description: '...' },
workflow: {
name: 'My Service',
goal: 'Detailed description of what the workflow does',
trigger: triggers.x402({ name: 'My Service', description: '...', price: '0.01', timeout: 600 }),
task: { description: 'Process requests' }
}
})
// Reload .env to pick up WALLET_PRIVATE_KEY written by provision()
dotenv.config({ override: true })
// Register on-chain (non-blocking — requires funded wallet on Base)
try {
const client = new PlatformClient()
await client.authenticate(process.env.WALLET_PRIVATE_KEY)
const erc8004 = await client.erc8004.registerOnChain({
workflowId: result.workflowId,
privateKey: process.env.WALLET_PRIVATE_KEY!,
name: 'My Service',
description: 'What this agent does'
})
console.log(`Agent ID: ${erc8004.agentId}`) // "8453:42"
console.log(`TX: ${erc8004.blockExplorerUrl}`)
console.log(`Scan: ${erc8004.scanUrl}`) // "https://www.8004scan.io/agents/base/42"
} catch (error) {
console.warn('ERC-8004 registration skipped:', error instanceof Error ? error.message : error)
}
await run(agent)
Base mainnet is the default; pass chainId / rpcUrl for other chains. See the openserv-client skill for the full ERC-8004 API reference and troubleshooting.
Do not use these legacy patterns:
- this.process() inside capabilities — legacy method requiring an OpenAI API key. Use this.generate() instead (platform-delegated, no key needed), or use runless capabilities.
- doTask override — the SDK handles task execution automatically.
- this.completeTask() — task completion is handled by the Runtime API.
This skill may be outdated. The OpenServ team ships updates regularly — new features, API changes, and bug fixes won't be reflected here until you update. If something isn't working as expected, or you want to make sure you have the latest information, update the skill before proceeding.
# Check if updates are available
npx skills check
# Update all installed skills to latest versions
npx skills update
Or reinstall the OpenServ skills directly:
npx skills add openserv-labs/skills
Weekly Installs
101
Repository
GitHub Stars
13
First Seen
Feb 4, 2026
Security Audits
Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
Installed on
codex: 88
opencode: 73
kimi-cli: 64
gemini-cli: 64
amp: 64
github-copilot: 64