ai-model-nodejs by tencentcloudbase/skills
npx skills add https://github.com/tencentcloudbase/skills --skill ai-model-nodejs
Use this skill for calling AI models in Node.js backend or CloudBase cloud functions using @cloudbase/node-sdk.
Use it when you need to call AI models (text generation, streaming, or image generation) from a Node.js backend or a CloudBase cloud function.
Do NOT use for: browser code (see the ai-model-web skill), WeChat Mini Programs (see the ai-model-wechat skill), or raw HTTP access to the models (see the http-api skill).
CloudBase provides these built-in providers and models:
| Provider | Models | Recommended |
|---|---|---|
| hunyuan-exp | hunyuan-turbos-latest, hunyuan-t1-latest, hunyuan-2.0-thinking-20251109, hunyuan-2.0-instruct-20251111 | ✅ hunyuan-2.0-instruct-20251111 |
| deepseek | deepseek-r1-0528, deepseek-v3-0324, deepseek-v3.2 | ✅ deepseek-v3.2 |
```shell
npm install @cloudbase/node-sdk
```
⚠️ The AI features require version 3.16.0 or later. Check your installed version with npm list @cloudbase/node-sdk.
```javascript
const tcb = require('@cloudbase/node-sdk');
const app = tcb.init({ env: '<YOUR_ENV_ID>' });

exports.main = async (event, context) => {
  const ai = app.ai();
  // Use AI features
};
```
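As a fuller sketch of this handler pattern, the function below wraps the documented generateText() call with error handling. The injected model argument and the stub client are assumptions made here so the sketch can run without a live CloudBase environment; in a real function, model would come from app.ai().createModel("hunyuan-exp").

```javascript
// Hedged sketch: a cloud-function-style handler around generateText().
// The model client is injected (an assumption for testability), so a
// stub can stand in for ai.createModel("hunyuan-exp").
async function handler(event, model) {
  try {
    const result = await model.generateText({
      model: "hunyuan-2.0-instruct-20251111",
      messages: [{ role: "user", content: event.prompt }],
    });
    return { ok: true, text: result.text };
  } catch (err) {
    // Surface the failure to the caller instead of crashing the function
    return { ok: false, error: String(err) };
  }
}

// Stub standing in for a real model client
const stubModel = {
  async generateText({ messages }) {
    return { text: `echo: ${messages[0].content}` };
  },
};

handler({ prompt: "hi" }, stubModel).then((r) => console.log(r.text)); // echo: hi
```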
⚠️ Important: When creating cloud functions that use AI models (especially generateImage() and large language model generation), set a longer timeout as these operations can be slow.
Using the MCP tool manageFunctions(action="createFunction"):
Legacy compatibility: if an older prompt still says createFunction, keep the same payload shape but execute it through manageFunctions(action="createFunction").
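As a hedged sketch, such a payload might look like the object below. Only func.timeout is documented here; the name and runtime fields are illustrative placeholders, not confirmed parts of the manageFunctions payload.

```javascript
// Hypothetical createFunction payload sketch. Only `func.timeout` is
// documented; `name` and `runtime` are illustrative placeholders.
const payload = {
  func: {
    name: "generate-image",  // hypothetical function name
    runtime: "Nodejs18.15",  // hypothetical runtime label
    timeout: 900,            // seconds; recommended maximum for generateImage()
  },
};

console.log(payload.func.timeout); // 900
```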
Set the timeout parameter in the func object: func.timeout (number).

Recommended timeout values:
- Text generation (generateText): 60-120 seconds
- Streaming text (streamText): 60-120 seconds
- Image generation (generateImage): 300-900 seconds (recommended: 900s)

```javascript
const tcb = require('@cloudbase/node-sdk');
const app = tcb.init({
  env: '<YOUR_ENV_ID>',
  secretId: '<YOUR_SECRET_ID>',
  secretKey: '<YOUR_SECRET_KEY>'
});

const ai = app.ai();
```
```javascript
const model = ai.createModel("hunyuan-exp");

const result = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

console.log(result.text);         // Generated text string
console.log(result.usage);        // { prompt_tokens, completion_tokens, total_tokens }
console.log(result.messages);     // Full message history
console.log(result.rawResponses); // Raw model responses
```
```javascript
const model = ai.createModel("hunyuan-exp");

const res = await model.streamText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

// Option 1: Iterate the text stream (recommended)
for await (const text of res.textStream) {
  console.log(text); // Incremental text chunks
}

// Option 2: Iterate the data stream for full response data
for await (const data of res.dataStream) {
  console.log(data); // Full response chunk with metadata
}

// Option 3: Await the final results
const messages = await res.messages; // Full message history
const usage = await res.usage;       // Token usage
```
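The textStream consumption pattern above can be sketched against a stand-in async generator; the real res.textStream from streamText() exposes the same AsyncIterable<string> shape, so the accumulation loop is identical. The fakeTextStream generator below is a stub, not SDK output.

```javascript
// Stand-in for res.textStream: an async generator yielding text chunks.
async function* fakeTextStream() {
  yield "李白";
  yield "是唐代";
  yield "诗人。";
}

// Accumulate incremental chunks into the final text, exactly as you
// would with a real AsyncIterable<string> text stream.
async function collect(textStream) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
  }
  return full;
}

collect(fakeTextStream()).then((text) => console.log(text)); // 李白是唐代诗人。
```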
⚠️ Image generation is only available in the Node SDK, not in the JS SDK (Web) or the WeChat Mini Program SDK.
```javascript
const imageModel = ai.createImageModel("hunyuan-image");

const res = await imageModel.generateImage({
  model: "hunyuan-image",
  prompt: "一只可爱的猫咪在草地上玩耍",
  size: "1024x1024",
  version: "v1.9",
});

console.log(res.data[0].url);            // Image URL (valid for 24 hours)
console.log(res.data[0].revised_prompt); // Revised prompt if revise=true
```
```typescript
interface HunyuanGenerateImageInput {
  model: "hunyuan-image";       // Required
  prompt: string;               // Required: image description
  version?: "v1.8.1" | "v1.9";  // Default: "v1.8.1"
  size?: string;                // Default: "1024x1024"
  negative_prompt?: string;     // v1.9 only
  style?: string;               // v1.9 only
  revise?: boolean;             // Default: true
  n?: number;                   // Default: 1
  footnote?: string;            // Watermark, max 16 chars
  seed?: number;                // Range: [1, 4294967295]
}

interface HunyuanGenerateImageOutput {
  id: string;
  created: number;
  data: Array<{
    url: string;                // Image URL (valid 24h)
    revised_prompt?: string;
  }>;
}
```
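A few of the input constraints above (footnote length, seed range, required prompt) can be checked client-side before calling the API. The validator below is a hypothetical helper written for this document, not part of the SDK.

```javascript
// Hypothetical pre-flight validator for HunyuanGenerateImageInput
// constraints documented above; not an SDK function.
function validateImageInput(input) {
  const errors = [];
  if (!input.prompt) errors.push("prompt is required");
  if (input.footnote && input.footnote.length > 16) {
    errors.push("footnote must be at most 16 characters");
  }
  if (input.seed !== undefined && (input.seed < 1 || input.seed > 4294967295)) {
    errors.push("seed must be in [1, 4294967295]");
  }
  return errors;
}

console.log(validateImageInput({ model: "hunyuan-image", prompt: "a cat", seed: 0 }));
// [ 'seed must be in [1, 4294967295]' ]
```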
```typescript
interface BaseChatModelInput {
  model: string;                        // Required: model name
  messages: Array<ChatModelMessage>;    // Required: message array
  temperature?: number;                 // Optional: sampling temperature
  topP?: number;                        // Optional: nucleus sampling
}

type ChatModelMessage =
  | { role: "user"; content: string }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };

interface GenerateTextResult {
  text: string;                         // Generated text
  messages: Array<ChatModelMessage>;    // Full message history
  usage: Usage;                         // Token usage
  rawResponses: Array<unknown>;         // Raw model responses
  error?: unknown;                      // Error, if any
}

interface StreamTextResult {
  textStream: AsyncIterable<string>;    // Incremental text stream
  dataStream: AsyncIterable<DataChunk>; // Full data stream
  messages: Promise<ChatModelMessage[]>;// Final message history
  usage: Promise<Usage>;                // Final token usage
  error?: unknown;                      // Error, if any
}

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```
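Because GenerateTextResult carries an optional error field alongside text and usage, callers should check it before trusting the output. The helper below is a sketch of that defensive pattern; the result object it consumes is a hand-built stand-in shaped like the interface above, not a real SDK response.

```javascript
// Sketch: defensive handling of a GenerateTextResult-shaped object.
// Check `error` before using `text`, then sanity-check token totals.
function summarize(result) {
  if (result.error) {
    throw new Error(`generation failed: ${result.error}`);
  }
  const { prompt_tokens, completion_tokens, total_tokens } = result.usage;
  if (prompt_tokens + completion_tokens !== total_tokens) {
    console.warn("usage totals do not add up");
  }
  return `${result.text} (${total_tokens} tokens)`;
}

// Hand-built stand-in, not a real model response
const fake = {
  text: "Li Bai was a Tang dynasty poet.",
  usage: { prompt_tokens: 12, completion_tokens: 9, total_tokens: 21 },
  rawResponses: [],
};

console.log(summarize(fake)); // Li Bai was a Tang dynasty poet. (21 tokens)
```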