openrouter-typescript-sdk by openrouterteam/agent-skills
npx skills add https://github.com/openrouterteam/agent-skills --skill openrouter-typescript-sdk
A comprehensive TypeScript SDK for interacting with OpenRouter's unified API, providing access to 300+ AI models through a single, type-safe interface. This skill enables AI agents to leverage the callModel pattern for text generation, tool usage, streaming, and multi-turn conversations.
npm install @openrouter/sdk
Get your API key from openrouter.ai/settings/keys, then initialize:
import OpenRouter from '@openrouter/sdk';
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
The SDK supports two authentication methods: API keys for server-side applications and an OAuth PKCE flow for user-facing applications.
The primary authentication method uses an API key from your OpenRouter account.
export OPENROUTER_API_KEY=sk-or-v1-your-key-here
import OpenRouter from '@openrouter/sdk';
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
The client automatically uses this key for all subsequent requests:
// The API key is included automatically
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello!'
});
Retrieve information about the currently configured API key:
const keyInfo = await client.apiKeys.getCurrentKeyMetadata();
console.log('Key name:', keyInfo.name);
console.log('Created:', keyInfo.createdAt);
Programmatically manage API keys:
// List all keys
const keys = await client.apiKeys.list();
// Create a new key
const newKey = await client.apiKeys.create({
name: 'Production API Key'
});
// Get a specific key by hash
const key = await client.apiKeys.get({
hash: 'sk-or-v1-...'
});
// Update a key
await client.apiKeys.update({
hash: 'sk-or-v1-...',
requestBody: {
name: 'Updated Key Name'
}
});
// Delete a key
await client.apiKeys.delete({
hash: 'sk-or-v1-...'
});
For user-facing applications where users should control their own API keys, OpenRouter supports OAuth with PKCE (Proof Key for Code Exchange). This flow lets users generate API keys through a browser authorization flow without your application ever handling their credentials.
Generate an authorization code and URL to start the OAuth flow:
const authResponse = await client.oAuth.createAuthCode({
callbackUrl: 'https://myapp.com/auth/callback'
});
// authResponse contains:
// - authorizationUrl: the URL to redirect the user to
// - code: the authorization code for the later exchange
console.log('Redirect user to:', authResponse.authorizationUrl);
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| callbackUrl | string | Yes | Your application's callback URL after user authorization |
Browser redirect:
// In a browser environment
window.location.href = authResponse.authorizationUrl;
// Or, in a server-rendered app, return a redirect response
res.redirect(authResponse.authorizationUrl);
After the user authorizes your application, they are redirected back to your callback URL with an authorization code. Exchange this code for an API key:
// In your callback handler
const code = req.query.code; // From the redirect URL
const apiKeyResponse = await client.oAuth.exchangeAuthCodeForAPIKey({
code: code
});
// apiKeyResponse contains:
// - key: the user's API key
// - additional metadata about the key
const userApiKey = apiKeyResponse.key;
// Store securely for this user's future requests
await saveUserApiKey(userId, userApiKey);
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | The authorization code from the OAuth redirect |
import OpenRouter from '@openrouter/sdk';
import express from 'express';
const app = express();
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY // Your app's key for OAuth operations
});
// Step 1: Initiate the OAuth flow
app.get('/auth/start', async (req, res) => {
const authResponse = await client.oAuth.createAuthCode({
callbackUrl: 'https://myapp.com/auth/callback'
});
// Store any state needed for the callback
req.session.oauthState = { /* ... */ };
// Redirect the user to the OpenRouter authorization page
res.redirect(authResponse.authorizationUrl);
});
// Step 2: Handle the callback and exchange the code
app.get('/auth/callback', async (req, res) => {
const { code } = req.query;
if (!code) {
return res.status(400).send('Authorization code missing');
}
try {
const apiKeyResponse = await client.oAuth.exchangeAuthCodeForAPIKey({
code: code as string
});
// Store the user's API key securely
await saveUserApiKey(req.session.userId, apiKeyResponse.key);
res.redirect('/dashboard?auth=success');
} catch (error) {
console.error('OAuth exchange failed:', error);
res.redirect('/auth/error');
}
});
// Step 3: Use the user's API key for their requests
app.post('/api/chat', async (req, res) => {
const userApiKey = await getUserApiKey(req.session.userId);
// Create a client with the user's key
const userClient = new OpenRouter({
apiKey: userApiKey
});
const result = userClient.callModel({
model: 'openai/gpt-5-nano',
input: req.body.message
});
const text = await result.getText();
res.json({ response: text });
});
The callModel function is the primary interface for text generation. It provides a unified, type-safe way to interact with any supported model.
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Explain quantum computing in one sentence.',
});
const text = await result.getText();
The SDK accepts flexible input types for the input parameter:
A simple string becomes a user message:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello, how are you?'
});
For multi-turn conversations:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: [
{ role: 'user', content: 'What is the capital of France?' },
{ role: 'assistant', content: 'The capital of France is Paris.' },
{ role: 'user', content: 'What is its population?' }
]
});
Including images and text:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: [
{
role: 'user',
content: [
{ type: 'text', text: 'What is in this image?' },
{ type: 'image_url', image_url: { url: 'https://example.com/image.png' } }
]
}
]
});
Use the instructions parameter for system-level guidance:
const result = client.callModel({
model: 'openai/gpt-5-nano',
instructions: 'You are a helpful coding assistant. Be concise.',
input: 'How do I reverse a string in Python?'
});
The result object provides multiple methods for consuming the response:
| Method | Purpose |
|---|---|
| getText() | Get the complete text after all tools finish |
| getResponse() | Full response object including token usage |
| getTextStream() | Stream text deltas as they arrive |
| getReasoningStream() | Stream reasoning tokens (for o1/reasoning models) |
| getToolCallsStream() | Stream tool calls as they complete |
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a haiku about coding'
});
const text = await result.getText();
console.log(text);
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello!'
});
const response = await result.getResponse();
console.log('Text:', response.text);
console.log('Token usage:', response.usage);
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a short story'
});
for await (const delta of result.getTextStream()) {
process.stdout.write(delta);
}
Create strongly typed tools using Zod schemas for automatic validation and type inference.
import { tool } from '@openrouter/sdk';
import { z } from 'zod';
const weatherTool = tool({
name: 'get_weather',
description: 'Get current weather for a location',
inputSchema: z.object({
location: z.string().describe('City name'),
units: z.enum(['celsius', 'fahrenheit']).optional().default('celsius')
}),
outputSchema: z.object({
temperature: z.number(),
conditions: z.string(),
humidity: z.number()
}),
execute: async (params) => {
// Implement the actual weather-fetching logic here
return {
temperature: 22,
conditions: 'Sunny',
humidity: 45
};
}
});
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'What is the weather in Paris?',
tools: [weatherTool]
});
const text = await result.getText();
// The SDK automatically executes the tool and continues the conversation
Standard execute functions return a result:
const calculatorTool = tool({
name: 'calculate',
description: 'Perform mathematical calculations',
inputSchema: z.object({
expression: z.string()
}),
execute: async ({ expression }) => {
// Note: eval is unsafe on untrusted input; use a real expression
// parser in production
return { result: eval(expression) };
}
});
Yield progress events using eventSchema:
const searchTool = tool({
name: 'web_search',
description: 'Search the web',
inputSchema: z.object({ query: z.string() }),
eventSchema: z.object({
type: z.literal('progress'),
message: z.string()
}),
outputSchema: z.object({ results: z.array(z.string()) }),
execute: async function* ({ query }) {
yield { type: 'progress', message: 'Searching...' };
yield { type: 'progress', message: 'Processing results...' };
return { results: ['Result 1', 'Result 2'] };
}
});
Set execute: false to handle tool calls yourself:
const manualTool = tool({
name: 'user_confirmation',
description: 'Request user confirmation',
inputSchema: z.object({ message: z.string() }),
execute: false
});
Control automatic tool execution with stop conditions:
import { stepCountIs, maxCost, hasToolCall } from '@openrouter/sdk';
const result = client.callModel({
model: 'openai/gpt-5.2',
input: 'Research this topic thoroughly',
tools: [searchTool, analyzeTool],
stopWhen: [
stepCountIs(10), // Stop after 10 turns
maxCost(1.00), // Stop if cost exceeds $1.00
hasToolCall('finish') // Stop when the 'finish' tool is called
]
});
| Condition | Description |
|---|---|
| stepCountIs(n) | Stop after n turns |
| maxCost(amount) | Stop when cost exceeds the given amount |
| hasToolCall(name) | Stop when the named tool is called |
const customStop = (context) => {
return context.messages.length > 20;
};
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Complex task',
tools: [myTool],
stopWhen: customStop
});
Compute parameters based on conversation context:
const result = client.callModel({
model: (ctx) => ctx.numberOfTurns > 3 ? 'openai/gpt-4' : 'openai/gpt-4o-mini',
temperature: (ctx) => ctx.numberOfTurns > 1 ? 0.3 : 0.7,
input: 'Hello!'
});
| Property | Type | Description |
|---|---|---|
| numberOfTurns | number | Current turn count |
| messages | array | All messages so far |
| instructions | string | Current system instructions |
| totalCost | number | Accumulated cost |
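These context properties can also drive budget-aware custom stop conditions. A minimal sketch, assuming a simplified context shape limited to the properties in the table (the real SDK context may carry more fields):

```typescript
// Simplified stand-in for the dynamic-parameter context, limited to
// the properties listed in the table above.
interface StopContext {
  numberOfTurns: number;
  messages: unknown[];
  totalCost: number;
}

// Stop once the accumulated cost passes a budget, or the
// conversation grows beyond 20 messages.
const budgetStop = (ctx: StopContext): boolean =>
  ctx.totalCost > 0.5 || ctx.messages.length > 20;
```

A predicate like this can be passed via stopWhen in the same way as the custom stop condition shown earlier.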
Tools can modify parameters for subsequent turns, enabling skills and context-aware behavior:
const skillTool = tool({
name: 'load_skill',
description: 'Load a specialized skill',
inputSchema: z.object({
skill: z.string().describe('Name of the skill to load')
}),
nextTurnParams: {
instructions: (params, context) => {
const skillInstructions = loadSkillInstructions(params.skill);
return `${context.instructions}\n\n${skillInstructions}`;
}
},
execute: async ({ skill }) => {
return { loaded: skill };
}
});
Control model behavior with these parameters:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a creative story',
temperature: 0.7, // Creativity (0-2; default varies by model)
maxOutputTokens: 1000, // Maximum tokens to generate
topP: 0.9, // Nucleus sampling parameter
frequencyPenalty: 0.5, // Reduce repetition
presencePenalty: 0.5, // Encourage new topics
stop: ['\n\n'] // Stop sequences
});
All streaming methods support concurrent consumers of a single result object:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a detailed explanation'
});
// Consumer 1: stream text to the console
const textPromise = (async () => {
for await (const delta of result.getTextStream()) {
process.stdout.write(delta);
}
})();
// Consumer 2: fetch the full response at the same time
const responsePromise = result.getResponse();
// Both run concurrently
const [, response] = await Promise.all([textPromise, responsePromise]);
console.log('\n\nTotal tokens:', response.usage.totalTokens);
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Search for information about TypeScript',
tools: [searchTool]
});
for await (const toolCall of result.getToolCallsStream()) {
console.log(`Tool called: ${toolCall.name}`);
console.log(`Arguments: ${JSON.stringify(toolCall.arguments)}`);
console.log(`Result: ${JSON.stringify(toolCall.result)}`);
}
Convert between ecosystem formats for interoperability:
import { fromChatMessages, toChatMessage } from '@openrouter/sdk';
// OpenAI messages → OpenRouter format
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: fromChatMessages(openaiMessages)
});
// Response → OpenAI chat message format
const response = await result.getResponse();
const chatMsg = toChatMessage(response);
import { fromClaudeMessages, toClaudeMessage } from '@openrouter/sdk';
// Claude messages → OpenRouter format
const result = client.callModel({
model: 'anthropic/claude-3-opus',
input: fromClaudeMessages(claudeMessages)
});
// Response → Claude message format
const response = await result.getResponse();
const claudeMsg = toClaudeMessage(response);
The SDK uses the OpenResponses format for messages. Understanding these shapes is essential for building robust agents.
Messages carry a role property that determines the message type:
| Role | Description |
|---|---|
| user | User-provided input |
| assistant | Model-generated responses |
| system | System instructions |
| developer | Developer-level directives |
| tool | Tool execution results |
Simple text content from a user or the assistant:
interface TextMessage {
role: 'user' | 'assistant';
content: string;
}
Messages with mixed content types:
interface MultimodalMessage {
role: 'user';
content: Array<
| { type: 'input_text'; text: string }
| { type: 'input_image'; imageUrl: string; detail?: 'auto' | 'low' | 'high' }
| {
type: 'image';
source: {
type: 'url' | 'base64';
url?: string;
media_type?: string;
data?: string
}
}
>;
}
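As a small illustrative helper (not an SDK export), a builder that assembles a user message pairing text with an image URL in the shape above:

```typescript
// Hypothetical helper: build a multimodal user message from a
// question and an image URL, following the interface above.
type MultimodalPart =
  | { type: 'input_text'; text: string }
  | { type: 'input_image'; imageUrl: string; detail?: 'auto' | 'low' | 'high' };

function imageQuestion(
  text: string,
  imageUrl: string
): { role: 'user'; content: MultimodalPart[] } {
  return {
    role: 'user',
    content: [
      { type: 'input_text', text },
      { type: 'input_image', imageUrl, detail: 'auto' },
    ],
  };
}
```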
When the model requests a tool execution:
interface ToolCallMessage {
role: 'assistant';
content?: null;
tool_calls?: Array<{
id: string;
type: 'function';
function: {
name: string;
arguments: string; // JSON-encoded arguments
};
}>;
}
The result returned after tool execution:
interface ToolResultMessage {
role: 'tool';
tool_call_id: string;
content: string; // JSON-encoded result
}
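A tiny illustrative helper (not an SDK export) that packages a tool's result into this message shape, JSON-encoding the payload as the interface requires:

```typescript
// Hypothetical helper: wrap a tool execution result as a tool
// message in the shape described above.
interface ToolResultMessage {
  role: 'tool';
  tool_call_id: string;
  content: string; // JSON-encoded result
}

function toToolResultMessage(toolCallId: string, result: unknown): ToolResultMessage {
  return {
    role: 'tool',
    tool_call_id: toolCallId,
    content: JSON.stringify(result),
  };
}
```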
The complete response object returned by getResponse():
interface OpenResponsesNonStreamingResponse {
output: Array<ResponseMessage>;
usage?: {
inputTokens: number;
outputTokens: number;
cachedTokens?: number;
};
finishReason?: string;
warnings?: Array<{
type: string;
message: string
}>;
experimental_providerMetadata?: Record<string, unknown>;
}
Output messages in the response array:
// Text/content message
interface ResponseOutputMessage {
type: 'message';
role: 'assistant';
content: string | Array<ContentPart>;
reasoning?: string; // For reasoning models (o1, etc.)
}
// Tool result in the output
interface FunctionCallOutputMessage {
type: 'function_call_output';
call_id: string;
output: string;
}
When tool calls are parsed from a response:
interface ParsedToolCall {
id: string;
name: string;
arguments: unknown; // Validated against inputSchema
}
After a tool finishes executing:
interface ToolExecutionResult {
toolCallId: string;
toolName: string;
result: unknown; // Validated against outputSchema
preliminaryResults?: unknown[]; // From generator tools
error?: Error;
}
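For illustration, a small helper that splits successful and failed tool executions; the shapes here are simplified from the interface above and the function is not an SDK export:

```typescript
// Simplified from ToolExecutionResult above: only the fields this
// helper needs.
interface ExecResult {
  toolName: string;
  result: unknown;
  error?: Error;
}

// Partition tool executions into successes and failures based on
// whether an error was recorded.
function partitionResults(results: ExecResult[]): { ok: ExecResult[]; failed: ExecResult[] } {
  return {
    ok: results.filter((r) => r.error === undefined),
    failed: results.filter((r) => r.error !== undefined),
  };
}
```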
Available in custom stop-condition callbacks:
interface StepResult {
stepType: 'initial' | 'continue';
text: string;
toolCalls: ParsedToolCall[];
toolResults: ToolExecutionResult[];
response: OpenResponsesNonStreamingResponse;
usage?: {
inputTokens: number;
outputTokens: number;
cachedTokens?: number;
};
finishReason?: string;
warnings?: Array<{ type: string; message: string }>;
experimental_providerMetadata?: Record<string, unknown>;
}
Available to tools and dynamic parameter functions:
interface TurnContext {
numberOfTurns: number; // Turn count (starts at 1)
turnRequest?: OpenResponsesRequest; // The request currently in flight
toolCall?: OpenResponsesFunctionToolCall; // Current tool call (in tool contexts)
}
The SDK provides several streaming methods, each yielding different event types.
The getFullResponsesStream() method yields the following event types:
type EnhancedResponseStreamEvent =
| ResponseCreatedEvent
| ResponseInProgressEvent
| OutputTextDeltaEvent
| OutputTextDoneEvent
| ReasoningDeltaEvent
| ReasoningDoneEvent
| FunctionCallArgumentsDeltaEvent
| FunctionCallArgumentsDoneEvent
| ResponseCompletedEvent
| ToolPreliminaryResultEvent;
| Event type | Description | Payload |
|---|---|---|
| response.created | Response object initialized | { response: ResponseObject } |
| response.in_progress | Generation started | {} |
| response.output_text.delta | Text chunk received | { delta: string } |
| response.output_text.done | Text generation complete | { text: string } |
| response.reasoning.delta | Reasoning chunk (o1 models) | { delta: string } |
| response.reasoning.done | Reasoning complete | { reasoning: string } |
| response.function_call_arguments.delta | Tool argument chunk | { delta: string } |
| response.function_call_arguments.done | Tool arguments complete | { arguments: string } |
| response.completed | Full response finished | { response: ResponseObject } |
| tool.preliminary_result | Generator tool progress | { toolCallId: string; result: unknown } |
interface OutputTextDeltaEvent {
type: 'response.output_text.delta';
delta: string;
}
For reasoning models (o1, etc.):
interface ReasoningDeltaEvent {
type: 'response.reasoning.delta';
delta: string;
}
interface FunctionCallArgumentsDeltaEvent {
type: 'response.function_call_arguments.delta';
delta: string;
}
From generator tools that yield progress:
interface ToolPreliminaryResultEvent {
type: 'tool.preliminary_result';
toolCallId: string;
result: unknown; // Matches the tool's eventSchema
}
interface ResponseCompletedEvent {
type: 'response.completed';
response: OpenResponsesNonStreamingResponse;
}
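To make the delta events concrete, here is a minimal accumulator over a recorded event sequence. The event shapes are simplified to the fields shown above; the real stream yields the richer EnhancedResponseStreamEvent types:

```typescript
// Simplified event union for illustration only.
type DeltaEvent =
  | { type: 'response.output_text.delta'; delta: string }
  | { type: 'response.reasoning.delta'; delta: string }
  | { type: 'response.output_text.done'; text: string };

// Concatenate only the visible-text deltas, ignoring reasoning.
function collectText(events: DeltaEvent[]): string {
  let text = '';
  for (const e of events) {
    if (e.type === 'response.output_text.delta') text += e.delta;
  }
  return text;
}
```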
The getToolStream() method yields:
type ToolStreamEvent =
| { type: 'delta'; content: string }
| { type: 'preliminary_result'; toolCallId: string; result: unknown };
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Analyze this data',
tools: [analysisTool]
});
for await (const event of result.getFullResponsesStream()) {
switch (event.type) {
case 'response.output_text.delta':
process.stdout.write(event.delta);
break;
case 'response.reasoning.delta':
console.log('[Reasoning]', event.delta);
break;
case 'response.function_call_arguments.delta':
console.log('[Tool Args]', event.delta);
break;
case 'tool.preliminary_result':
console.log(`[Progress: ${event.toolCallId}]`, event.result);
break;
case 'response.completed':
console.log('\n[Complete]', event.response.usage);
break;
}
}
getNewMessagesStream() yields updates in OpenResponses format:
type MessageStreamUpdate =
| ResponsesOutputMessage // Text/content updates
| OpenResponsesFunctionCallOutput; // Tool results
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Research this topic',
tools: [searchTool]
});
const allMessages: MessageStreamUpdate[] = [];
for await (const message of result.getNewMessagesStream()) {
allMessages.push(message);
if (message.type === 'message') {
console.log('Assistant:', message.content);
} else if (message.type === 'function_call_output') {
console.log('Tool result:', message.output);
}
}
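The collected update list can be folded into a readable transcript. A sketch with update shapes simplified for illustration (the real stream yields full OpenResponses message objects):

```typescript
// Simplified update shapes, not the SDK's full types.
type Update =
  | { type: 'message'; content: string }
  | { type: 'function_call_output'; call_id: string; output: string };

// Fold a collected update list into readable transcript lines.
function toTranscript(updates: Update[]): string[] {
  return updates.map((u) =>
    u.type === 'message'
      ? `assistant: ${u.content}`
      : `tool(${u.call_id}): ${u.output}`
  );
}
```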
Beyond callModel, the client provides access to other API endpoints:
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
// List available models
const models = await client.models.list();
// Chat completions (an alternative to callModel)
const completion = await client.chat.send({
model: 'openai/gpt-5-nano',
messages: [{ role: 'user', content: 'Hello!' }]
});
// Legacy completions format
const legacyCompletion = await client.completions.generate({
model: 'openai/gpt-5-nano',
prompt: 'Once upon a time'
});
// Usage analytics
const activity = await client.analytics.getUserActivity();
// Credit balance
const credits = await client.credits.getCredits();
// API key management
const keys = await client.apiKeys.list();
The SDK provides specific error types with actionable messages:
try {
const result = await client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello!'
});
const text = await result.getText();
} catch (error) {
if (error.statusCode === 401) {
console.error('Invalid API key - check your OPENROUTER_API_KEY');
} else if (error.statusCode === 402) {
console.error('Insufficient credits - add credits at openrouter.ai');
} else if (error.statusCode === 429) {
console.error('Rate limited - implement backoff retry');
} else if (error.statusCode === 503) {
console.error('Model temporarily unavailable - try again or use fallback');
} else {
console.error('Unexpected error:', error.message);
}
}
| 代码 | 含义 | 操作 |
|---|---|---|
| 400 | 错误请求 | 检查请求参数 |
| 401 | 未授权 | 验证 API 密钥 |
| 402 | 需要付款 | 添加信用 |
| 429 | 速率限制 | 实现指数退避重试 |
| 500 | 服务器错误 | 使用退避重试 |
| 503 | 服务不可用 | 尝试替代模型 |
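The backoff recommended for 429 and 5xx responses can be computed with a small helper. This is a sketch, not an SDK utility; real clients usually add random jitter on top:

```typescript
// Exponential backoff schedule: baseMs * 2^attempt, capped at capMs.
// attempt is zero-based (0 = first retry).
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```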
import OpenRouter, { tool, stepCountIs, hasToolCall } from '@openrouter/sdk';
import { z } from 'zod';
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
// Define the tools
const searchTool = tool({
name: 'web_search',
description: 'Search the web for information',
inputSchema: z.object({
query: z.string().describe('Search query')
}),
outputSchema: z.object({
results: z.array(z.object({
title: z.string(),
snippet: z.string(),
url: z.string()
}))
}),
execute: async ({ query }) => {
// Implement an actual search here
return {
results: [
{ title: 'Example', snippet: 'Example result', url: 'https://example.com' }
]
};
}
});
const finishTool = tool({
name: 'finish',
description: 'Complete the task with final answer',
inputSchema: z.object({
answer: z.string().describe('The final answer')
}),
execute: async ({ answer }) => ({ answer })
});
// Run the agent
async function runAgent(task: string) {
const result = client.callModel({
model: 'openai/gpt-5-nano',
instructions: 'You are a helpful research assistant. Use web_search to find information, then use finish to provide your final answer.',
input: task,
tools: [searchTool, finishTool],
stopWhen: [
stepCountIs(10),
hasToolCall('finish')
]
});
// Stream tool-call progress
for await (const toolCall of result.getToolCallsStream()) {
console.log(`[${toolCall.name}] ${JSON.stringify(toolCall.arguments)}`);
}
return await result.getText();
}
// Usage
const answer = await runAgent('What are the latest developments in quantum computing?');
console.log('Final answer:', answer);
The callModel pattern provides automatic tool execution, type safety, and multi-turn handling.
Zod provides runtime validation and excellent TypeScript inference:
import { z } from 'zod';
const schema = z.object({
name: z.string().min(1),
age: z.number().int().positive()
});
Always set sensible limits to prevent runaway costs:
stopWhen: [stepCountIs(20), maxCost(5.00)]
Implement retry logic for transient failures:
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
async function callWithRetry(params, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
return await client.callModel(params).getText();
} catch (error) {
if (error.statusCode === 429 || error.statusCode >= 500) {
await sleep(Math.pow(2, i) * 1000);
continue;
}
throw error;
}
}
throw new Error('Retries exhausted');
}
Streaming provides a better user experience and allows early termination:
for await (const delta of result.getTextStream()) {
// Process each delta incrementally
}
SDK status: Beta. Report issues on GitHub.
A comprehensive TypeScript SDK for interacting with OpenRouter's unified API, providing access to 300+ AI models through a single, type-safe interface. This skill enables AI agents to leverage the callModel pattern for text generation, tool usage, streaming, and multi-turn conversations.
npm install @openrouter/sdk
Get your API key from openrouter.ai/settings/keys, then initialize:
import OpenRouter from '@openrouter/sdk';
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
The SDK supports two authentication methods: API keys for server-side applications and OAuth PKCE flow for user-facing applications.
The primary authentication method uses API keys from your OpenRouter account.
export OPENROUTER_API_KEY=sk-or-v1-your-key-here
import OpenRouter from '@openrouter/sdk';
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
The client automatically uses this key for all subsequent requests:
// API key is automatically included
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello!'
});
Retrieve information about the currently configured API key:
const keyInfo = await client.apiKeys.getCurrentKeyMetadata();
console.log('Key name:', keyInfo.name);
console.log('Created:', keyInfo.createdAt);
Programmatically manage API keys:
// List all keys
const keys = await client.apiKeys.list();
// Create a new key
const newKey = await client.apiKeys.create({
name: 'Production API Key'
});
// Get a specific key by hash
const key = await client.apiKeys.get({
hash: 'sk-or-v1-...'
});
// Update a key
await client.apiKeys.update({
hash: 'sk-or-v1-...',
requestBody: {
name: 'Updated Key Name'
}
});
// Delete a key
await client.apiKeys.delete({
hash: 'sk-or-v1-...'
});
For user-facing applications where users should control their own API keys, OpenRouter supports OAuth with PKCE (Proof Key for Code Exchange). This flow allows users to generate API keys through a browser authorization flow without your application handling their credentials.
Generate an authorization code and URL to start the OAuth flow:
const authResponse = await client.oAuth.createAuthCode({
callbackUrl: 'https://myapp.com/auth/callback'
});
// authResponse contains:
// - authorizationUrl: URL to redirect the user to
// - code: The authorization code for later exchange
console.log('Redirect user to:', authResponse.authorizationUrl);
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
callbackUrl | string | Yes | Your application's callback URL after user authorization |
Browser Redirect:
// In a browser environment
window.location.href = authResponse.authorizationUrl;
// Or in a server-rendered app, return a redirect response
res.redirect(authResponse.authorizationUrl);
After the user authorizes your application, they are redirected back to your callback URL with an authorization code. Exchange this code for an API key:
// In your callback handler
const code = req.query.code; // From the redirect URL
const apiKeyResponse = await client.oAuth.exchangeAuthCodeForAPIKey({
code: code
});
// apiKeyResponse contains:
// - key: The user's API key
// - Additional metadata about the key
const userApiKey = apiKeyResponse.key;
// Store securely for this user's future requests
await saveUserApiKey(userId, userApiKey);
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
code | string | Yes | The authorization code from the OAuth redirect |
import OpenRouter from '@openrouter/sdk';
import express from 'express';
const app = express();
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY // Your app's key for OAuth operations
});
// Step 1: Initiate OAuth flow
app.get('/auth/start', async (req, res) => {
const authResponse = await client.oAuth.createAuthCode({
callbackUrl: 'https://myapp.com/auth/callback'
});
// Store any state needed for the callback
req.session.oauthState = { /* ... */ };
// Redirect user to OpenRouter authorization page
res.redirect(authResponse.authorizationUrl);
});
// Step 2: Handle callback and exchange code
app.get('/auth/callback', async (req, res) => {
const { code } = req.query;
if (!code) {
return res.status(400).send('Authorization code missing');
}
try {
const apiKeyResponse = await client.oAuth.exchangeAuthCodeForAPIKey({
code: code as string
});
// Store the user's API key securely
await saveUserApiKey(req.session.userId, apiKeyResponse.key);
res.redirect('/dashboard?auth=success');
} catch (error) {
console.error('OAuth exchange failed:', error);
res.redirect('/auth/error');
}
});
// Step 3: Use the user's API key for their requests
app.post('/api/chat', async (req, res) => {
const userApiKey = await getUserApiKey(req.session.userId);
// Create a client with the user's key
const userClient = new OpenRouter({
apiKey: userApiKey
});
const result = userClient.callModel({
model: 'openai/gpt-5-nano',
input: req.body.message
});
const text = await result.getText();
res.json({ response: text });
});
The callModel function is the primary interface for text generation. It provides a unified, type-safe way to interact with any supported model.
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Explain quantum computing in one sentence.',
});
const text = await result.getText();
The SDK accepts flexible input types for the input parameter:
A simple string becomes a user message:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello, how are you?'
});
For multi-turn conversations:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: [
{ role: 'user', content: 'What is the capital of France?' },
{ role: 'assistant', content: 'The capital of France is Paris.' },
{ role: 'user', content: 'What is its population?' }
]
});
Including images and text:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: [
{
role: 'user',
content: [
{ type: 'text', text: 'What is in this image?' },
{ type: 'image_url', image_url: { url: 'https://example.com/image.png' } }
]
}
]
});
Use the instructions parameter for system-level guidance:
const result = client.callModel({
model: 'openai/gpt-5-nano',
instructions: 'You are a helpful coding assistant. Be concise.',
input: 'How do I reverse a string in Python?'
});
The result object provides multiple methods for consuming the response:
| Method | Purpose |
|---|---|
getText() | Get complete text after all tools complete |
getResponse() | Full response object with token usage |
getTextStream() | Stream text deltas as they arrive |
getReasoningStream() | Stream reasoning tokens (for o1/reasoning models) |
getToolCallsStream() | Stream tool calls as they complete |
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a haiku about coding'
});
const text = await result.getText();
console.log(text);
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello!'
});
const response = await result.getResponse();
console.log('Text:', response.text);
console.log('Token usage:', response.usage);
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a short story'
});
for await (const delta of result.getTextStream()) {
process.stdout.write(delta);
}
Create strongly-typed tools using Zod schemas for automatic validation and type inference.
import { tool } from '@openrouter/sdk';
import { z } from 'zod';
const weatherTool = tool({
name: 'get_weather',
description: 'Get current weather for a location',
inputSchema: z.object({
location: z.string().describe('City name'),
units: z.enum(['celsius', 'fahrenheit']).optional().default('celsius')
}),
outputSchema: z.object({
temperature: z.number(),
conditions: z.string(),
humidity: z.number()
}),
execute: async (params) => {
// Implement weather fetching logic
return {
temperature: 22,
conditions: 'Sunny',
humidity: 45
};
}
});
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'What is the weather in Paris?',
tools: [weatherTool]
});
const text = await result.getText();
// The SDK automatically executes the tool and continues the conversation
Standard execute functions that return a result:
const calculatorTool = tool({
name: 'calculate',
description: 'Perform mathematical calculations',
inputSchema: z.object({
expression: z.string()
}),
execute: async ({ expression }) => {
return { result: eval(expression) };
}
});
Yield progress events using eventSchema:
const searchTool = tool({
name: 'web_search',
description: 'Search the web',
inputSchema: z.object({ query: z.string() }),
eventSchema: z.object({
type: z.literal('progress'),
message: z.string()
}),
outputSchema: z.object({ results: z.array(z.string()) }),
execute: async function* ({ query }) {
yield { type: 'progress', message: 'Searching...' };
yield { type: 'progress', message: 'Processing results...' };
return { results: ['Result 1', 'Result 2'] };
}
});
Set execute: false to handle tool calls yourself:
const manualTool = tool({
name: 'user_confirmation',
description: 'Request user confirmation',
inputSchema: z.object({ message: z.string() }),
execute: false
});
Control automatic tool execution with stop conditions:
import { stepCountIs, maxCost, hasToolCall } from '@openrouter/sdk';
const result = client.callModel({
model: 'openai/gpt-5.2',
input: 'Research this topic thoroughly',
tools: [searchTool, analyzeTool],
stopWhen: [
stepCountIs(10), // Stop after 10 turns
maxCost(1.00), // Stop if cost exceeds $1.00
hasToolCall('finish') // Stop when 'finish' tool is called
]
});
| Condition | Description |
|---|---|
stepCountIs(n) | Stop after n turns |
maxCost(amount) | Stop when cost exceeds amount |
hasToolCall(name) | Stop when specific tool is called |
const customStop = (context) => {
return context.messages.length > 20;
};
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Complex task',
tools: [myTool],
stopWhen: customStop
});
Compute parameters based on conversation context:
const result = client.callModel({
model: (ctx) => ctx.numberOfTurns > 3 ? 'openai/gpt-4' : 'openai/gpt-4o-mini',
temperature: (ctx) => ctx.numberOfTurns > 1 ? 0.3 : 0.7,
input: 'Hello!'
});
| Property | Type | Description |
|---|---|---|
numberOfTurns | number | Current turn count |
messages | array | All messages so far |
instructions | string | Current system instructions |
totalCost | number | Accumulated cost |
Tools can modify parameters for subsequent turns, enabling skills and context-aware behavior:
const skillTool = tool({
name: 'load_skill',
description: 'Load a specialized skill',
inputSchema: z.object({
skill: z.string().describe('Name of the skill to load')
}),
nextTurnParams: {
instructions: (params, context) => {
const skillInstructions = loadSkillInstructions(params.skill);
return `${context.instructions}\n\n${skillInstructions}`;
}
},
execute: async ({ skill }) => {
return { loaded: skill };
}
});
Control model behavior with these parameters:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a creative story',
temperature: 0.7, // Creativity (0-2, default varies by model)
maxOutputTokens: 1000, // Maximum tokens to generate
topP: 0.9, // Nucleus sampling parameter
frequencyPenalty: 0.5, // Reduce repetition
presencePenalty: 0.5, // Encourage new topics
stop: ['\n\n'] // Stop sequences
});
All streaming methods support concurrent consumers from a single result object:
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Write a detailed explanation'
});
// Consumer 1: Stream text to console
const textPromise = (async () => {
for await (const delta of result.getTextStream()) {
process.stdout.write(delta);
}
})();
// Consumer 2: Get full response simultaneously
const responsePromise = result.getResponse();
// Both run concurrently
const [, response] = await Promise.all([textPromise, responsePromise]);
console.log('\n\nTotal tokens:', response.usage.totalTokens);
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Search for information about TypeScript',
tools: [searchTool]
});
for await (const toolCall of result.getToolCallsStream()) {
console.log(`Tool called: ${toolCall.name}`);
console.log(`Arguments: ${JSON.stringify(toolCall.arguments)}`);
console.log(`Result: ${JSON.stringify(toolCall.result)}`);
}
Convert between ecosystem formats for interoperability:
import { fromChatMessages, toChatMessage } from '@openrouter/sdk';
// OpenAI messages → OpenRouter format
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: fromChatMessages(openaiMessages)
});
// Response → OpenAI chat message format
const response = await result.getResponse();
const chatMsg = toChatMessage(response);
import { fromClaudeMessages, toClaudeMessage } from '@openrouter/sdk';
// Claude messages → OpenRouter format
const result = client.callModel({
model: 'anthropic/claude-3-opus',
input: fromClaudeMessages(claudeMessages)
});
// Response → Claude message format
const response = await result.getResponse();
const claudeMsg = toClaudeMessage(response);
The SDK uses the OpenResponses format for messages. Understanding these shapes is essential for building robust agents.
Messages contain a role property that determines the message type:
| Role | Description |
|---|---|
| user | User-provided input |
| assistant | Model-generated responses |
| system | System instructions |
| developer | Developer-level directives |
| tool | Tool execution results |
Simple text content from user or assistant:
interface TextMessage {
role: 'user' | 'assistant';
content: string;
}
Messages with mixed content types:
interface MultimodalMessage {
role: 'user';
content: Array<
| { type: 'input_text'; text: string }
| { type: 'input_image'; imageUrl: string; detail?: 'auto' | 'low' | 'high' }
| {
type: 'image';
source: {
type: 'url' | 'base64';
url?: string;
media_type?: string;
data?: string
}
}
>;
}
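Using the shapes above, a user message mixing text and an image URL can be constructed like this (the URL is a placeholder for illustration):

```typescript
type ContentPart =
  | { type: 'input_text'; text: string }
  | { type: 'input_image'; imageUrl: string; detail?: 'auto' | 'low' | 'high' };

interface MultimodalMessage {
  role: 'user';
  content: ContentPart[];
}

// Placeholder image URL; substitute a real one in your application.
const message: MultimodalMessage = {
  role: 'user',
  content: [
    { type: 'input_text', text: 'What is in this image?' },
    { type: 'input_image', imageUrl: 'https://example.com/photo.jpg', detail: 'low' }
  ]
};
```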
When the model requests a tool execution:
interface ToolCallMessage {
role: 'assistant';
content?: null;
tool_calls?: Array<{
id: string;
type: 'function';
function: {
name: string;
arguments: string; // JSON-encoded arguments
};
}>;
}
Result returned after tool execution:
interface ToolResultMessage {
role: 'tool';
tool_call_id: string;
content: string; // JSON-encoded result
}
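Putting the two shapes together, a single tool round trip looks like this sketch; the get_weather tool and its payload are illustrative, not part of the SDK:

```typescript
// Assistant turn requesting a tool call; arguments are a JSON-encoded string.
const toolCallMessage = {
  role: 'assistant' as const,
  content: null,
  tool_calls: [{
    id: 'call_1',
    type: 'function' as const,
    function: {
      name: 'get_weather',
      arguments: JSON.stringify({ city: 'Paris' })
    }
  }]
};

// Your code decodes the arguments, runs the tool, and replies with a
// tool message whose tool_call_id matches the originating call.
const args = JSON.parse(toolCallMessage.tool_calls[0].function.arguments);
const toolResultMessage = {
  role: 'tool' as const,
  tool_call_id: toolCallMessage.tool_calls[0].id,
  content: JSON.stringify({ city: args.city, tempC: 18 })
};
```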
The complete response object from getResponse():
interface OpenResponsesNonStreamingResponse {
output: Array<ResponseMessage>;
usage?: {
inputTokens: number;
outputTokens: number;
cachedTokens?: number;
};
finishReason?: string;
warnings?: Array<{
type: string;
message: string
}>;
experimental_providerMetadata?: Record<string, unknown>;
}
Output messages in the response array:
// Text/content message
interface ResponseOutputMessage {
type: 'message';
role: 'assistant';
content: string | Array<ContentPart>;
reasoning?: string; // For reasoning models (o1, etc.)
}
// Tool result in output
interface FunctionCallOutputMessage {
type: 'function_call_output';
call_id: string;
output: string;
}
When tool calls are parsed from the response:
interface ParsedToolCall {
id: string;
name: string;
arguments: unknown; // Validated against inputSchema
}
After a tool completes execution:
interface ToolExecutionResult {
toolCallId: string;
toolName: string;
result: unknown; // Validated against outputSchema
preliminaryResults?: unknown[]; // From generator tools
error?: Error;
}
Available in custom stop condition callbacks:
interface StepResult {
stepType: 'initial' | 'continue';
text: string;
toolCalls: ParsedToolCall[];
toolResults: ToolExecutionResult[];
response: OpenResponsesNonStreamingResponse;
usage?: {
inputTokens: number;
outputTokens: number;
cachedTokens?: number;
};
finishReason?: string;
warnings?: Array<{ type: string; message: string }>;
experimental_providerMetadata?: Record<string, unknown>;
}
Available to tools and dynamic parameter functions:
interface TurnContext {
numberOfTurns: number; // Turn count (1-indexed)
turnRequest?: OpenResponsesRequest; // Current request being made
toolCall?: OpenResponsesFunctionToolCall; // Current tool call (in tool context)
}
The SDK provides multiple streaming methods that yield different event types.
The getFullResponsesStream() method yields these event types:
type EnhancedResponseStreamEvent =
| ResponseCreatedEvent
| ResponseInProgressEvent
| OutputTextDeltaEvent
| OutputTextDoneEvent
| ReasoningDeltaEvent
| ReasoningDoneEvent
| FunctionCallArgumentsDeltaEvent
| FunctionCallArgumentsDoneEvent
| ResponseCompletedEvent
| ToolPreliminaryResultEvent;
| Event Type | Description | Payload |
|---|---|---|
| response.created | Response object initialized | { response: ResponseObject } |
| response.in_progress | Generation has started | {} |
| response.output_text.delta | Text chunk received | { delta: string } |
| response.output_text.done | Text generation complete | { text: string } |
| response.reasoning.delta | Reasoning chunk (o1 models) | { delta: string } |
| response.reasoning.done | Reasoning complete | { reasoning: string } |
| response.function_call_arguments.delta | Tool argument chunk | { delta: string } |
| response.function_call_arguments.done | Tool arguments complete | { arguments: string } |
| response.completed | Full response complete | { response: ResponseObject } |
| tool.preliminary_result | Generator tool progress | { toolCallId: string; result: unknown } |
interface OutputTextDeltaEvent {
type: 'response.output_text.delta';
delta: string;
}
For reasoning models (o1, etc.):
interface ReasoningDeltaEvent {
type: 'response.reasoning.delta';
delta: string;
}
interface FunctionCallArgumentsDeltaEvent {
type: 'response.function_call_arguments.delta';
delta: string;
}
From generator tools that yield progress:
interface ToolPreliminaryResultEvent {
type: 'tool.preliminary_result';
toolCallId: string;
result: unknown; // Matches the tool's eventSchema
}
interface ResponseCompletedEvent {
type: 'response.completed';
response: OpenResponsesNonStreamingResponse;
}
The getToolStream() method yields:
type ToolStreamEvent =
| { type: 'delta'; content: string }
| { type: 'preliminary_result'; toolCallId: string; result: unknown };
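Assuming the event shapes above, a consumer can fold a sequence of these events into accumulated text plus per-tool progress; a self-contained sketch:

```typescript
type ToolStreamEvent =
  | { type: 'delta'; content: string }
  | { type: 'preliminary_result'; toolCallId: string; result: unknown };

// Fold a stream of events into the final text and a map of
// tool-call id -> list of preliminary results, in arrival order.
function foldToolStream(events: ToolStreamEvent[]) {
  let text = '';
  const progress = new Map<string, unknown[]>();
  for (const event of events) {
    if (event.type === 'delta') {
      text += event.content;
    } else {
      const existing = progress.get(event.toolCallId) ?? [];
      existing.push(event.result);
      progress.set(event.toolCallId, existing);
    }
  }
  return { text, progress };
}
```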
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Analyze this data',
tools: [analysisTool]
});
for await (const event of result.getFullResponsesStream()) {
switch (event.type) {
case 'response.output_text.delta':
process.stdout.write(event.delta);
break;
case 'response.reasoning.delta':
console.log('[Reasoning]', event.delta);
break;
case 'response.function_call_arguments.delta':
console.log('[Tool Args]', event.delta);
break;
case 'tool.preliminary_result':
console.log(`[Progress: ${event.toolCallId}]`, event.result);
break;
case 'response.completed':
console.log('\n[Complete]', event.response.usage);
break;
}
}
The getNewMessagesStream() yields OpenResponses format updates:
type MessageStreamUpdate =
| ResponsesOutputMessage // Text/content updates
| OpenResponsesFunctionCallOutput; // Tool results
const result = client.callModel({
model: 'openai/gpt-5-nano',
input: 'Research this topic',
tools: [searchTool]
});
const allMessages: MessageStreamUpdate[] = [];
for await (const message of result.getNewMessagesStream()) {
allMessages.push(message);
if (message.type === 'message') {
console.log('Assistant:', message.content);
} else if (message.type === 'function_call_output') {
console.log('Tool result:', message.output);
}
}
Beyond callModel, the client provides access to other API endpoints:
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
// List available models
const models = await client.models.list();
// Chat completions (alternative to callModel)
const completion = await client.chat.send({
model: 'openai/gpt-5-nano',
messages: [{ role: 'user', content: 'Hello!' }]
});
// Legacy completions format
const legacyCompletion = await client.completions.generate({
model: 'openai/gpt-5-nano',
prompt: 'Once upon a time'
});
// Usage analytics
const activity = await client.analytics.getUserActivity();
// Credit balance
const credits = await client.credits.getCredits();
// API key management
const keys = await client.apiKeys.list();
The SDK provides specific error types with actionable messages:
try {
const result = await client.callModel({
model: 'openai/gpt-5-nano',
input: 'Hello!'
});
const text = await result.getText();
} catch (error) {
if (error.statusCode === 401) {
console.error('Invalid API key - check your OPENROUTER_API_KEY');
} else if (error.statusCode === 402) {
console.error('Insufficient credits - add credits at openrouter.ai');
} else if (error.statusCode === 429) {
console.error('Rate limited - implement backoff retry');
} else if (error.statusCode === 503) {
console.error('Model temporarily unavailable - try again or use fallback');
} else {
console.error('Unexpected error:', error.message);
}
}
| Code | Meaning | Action |
|---|---|---|
| 400 | Bad request | Check request parameters |
| 401 | Unauthorized | Verify API key |
| 402 | Payment required | Add credits |
| 429 | Rate limited | Implement exponential backoff |
| 500 | Server error | Retry with backoff |
| 503 | Service unavailable | Try alternative model |
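The table above collapses into a small helper that decides whether a failed request is worth retrying; a sketch, with a policy you should adapt to your application:

```typescript
// Retry on rate limits (429) and transient server errors (5xx);
// fail fast on client errors like 400/401/402.
function isRetryable(statusCode: number): boolean {
  return statusCode === 429 || (statusCode >= 500 && statusCode < 600);
}
```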
import OpenRouter, { tool, stepCountIs, hasToolCall } from '@openrouter/sdk';
import { z } from 'zod';
const client = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY
});
// Define tools
const searchTool = tool({
name: 'web_search',
description: 'Search the web for information',
inputSchema: z.object({
query: z.string().describe('Search query')
}),
outputSchema: z.object({
results: z.array(z.object({
title: z.string(),
snippet: z.string(),
url: z.string()
}))
}),
execute: async ({ query }) => {
// Implement actual search
return {
results: [
{ title: 'Example', snippet: 'Example result', url: 'https://example.com' }
]
};
}
});
const finishTool = tool({
name: 'finish',
description: 'Complete the task with final answer',
inputSchema: z.object({
answer: z.string().describe('The final answer')
}),
execute: async ({ answer }) => ({ answer })
});
// Run agent
async function runAgent(task: string) {
const result = client.callModel({
model: 'openai/gpt-5-nano',
instructions: 'You are a helpful research assistant. Use web_search to find information, then use finish to provide your final answer.',
input: task,
tools: [searchTool, finishTool],
stopWhen: [
stepCountIs(10),
hasToolCall('finish')
]
});
// Stream progress
for await (const toolCall of result.getToolCallsStream()) {
console.log(`[${toolCall.name}] ${JSON.stringify(toolCall.arguments)}`);
}
return await result.getText();
}
// Usage
const answer = await runAgent('What are the latest developments in quantum computing?');
console.log('Final answer:', answer);
The callModel pattern provides automatic tool execution, type safety, and multi-turn handling.
Zod provides runtime validation and excellent TypeScript inference:
import { z } from 'zod';
const schema = z.object({
name: z.string().min(1),
age: z.number().int().positive()
});
Always set reasonable limits to prevent runaway costs:
stopWhen: [stepCountIs(20), maxCost(5.00)]
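Custom stop conditions can inspect the step history (see StepResult above). A sketch of a token-budget condition, assuming a condition is a predicate over the steps taken so far (the exact callback signature is an assumption here):

```typescript
interface StepUsage { inputTokens: number; outputTokens: number }
interface Step { usage?: StepUsage }

// Hypothetical custom condition: stop once cumulative output tokens
// across all steps reach the given budget.
function outputTokenBudget(maxOutputTokens: number) {
  return (steps: Step[]): boolean => {
    const total = steps.reduce(
      (sum, step) => sum + (step.usage?.outputTokens ?? 0),
      0
    );
    return total >= maxOutputTokens;
  };
}
```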
Implement retry logic for transient failures:
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithRetry(params, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await client.callModel(params).getText();
    } catch (error) {
      // Retry rate limits and server errors with exponential backoff;
      // rethrow everything else (and the last attempt) immediately
      if (i < maxRetries - 1 && (error.statusCode === 429 || error.statusCode >= 500)) {
        await sleep(Math.pow(2, i) * 1000);
        continue;
      }
      throw error;
    }
  }
}
Streaming provides better UX and allows early termination:
for await (const delta of result.getTextStream()) {
// Process incrementally
}
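Breaking out of the loop ends the stream early and runs the generator's cleanup. The same mechanics can be seen with a plain synchronous generator standing in for the text stream; the finally block models the cleanup (e.g. aborting the underlying request) that runs on early exit:

```typescript
let cleanedUp = false;

// Stand-in for a text stream; `finally` runs when the consumer breaks early.
function* fakeTextStream() {
  try {
    yield 'Hello, ';
    yield 'world';
    yield '! (never reached)';
  } finally {
    cleanedUp = true;
  }
}

let text = '';
for (const delta of fakeTextStream()) {
  text += delta;
  if (text.includes('world')) break; // Early termination
}
```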
SDK Status: Beta - Report issues on GitHub