ai-model-wechat by tencentcloudbase/skills
```sh
npx skills add https://github.com/tencentcloudbase/skills --skill ai-model-wechat
```

Use this skill when you need to call AI models with `wx.cloud.extend.AI` in a WeChat Mini Program.
Use it when you need to call AI models from a WeChat Mini Program with `wx.cloud.extend.AI`.

Do NOT use for:

- Web applications: use the ai-model-web skill
- Node.js backends: use the ai-model-nodejs skill
- Image generation: use the ai-model-nodejs skill (not available in Mini Program)
- Direct HTTP API calls: use the http-api skill

CloudBase provides the following built-in providers and models:
| Provider | Models | Recommended |
|---|---|---|
| hunyuan-exp | hunyuan-turbos-latest, hunyuan-t1-latest, hunyuan-2.0-thinking-20251109, hunyuan-2.0-instruct-20251111 | ✅ hunyuan-2.0-instruct-20251111 |
| deepseek | deepseek-r1-0528, deepseek-v3-0324, deepseek-v3.2 | ✅ deepseek-v3.2 |
```js
// app.js — initialize the CloudBase environment at launch
App({
  onLaunch: function () {
    wx.cloud.init({ env: "<YOUR_ENV_ID>" });
  }
})
```
⚠️ Different from the JS/Node SDK: the return value is the raw model response.
```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // recommended model
  messages: [{ role: "user", content: "你好" }],
});

// ⚠️ The return value is the RAW model response, not wrapped like the JS/Node SDK
console.log(res.choices[0].message.content); // access via the choices array
console.log(res.usage);                      // token usage
```
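If you prefer the `{ text, usage }` shape that the JS/Node SDK returns, a thin adapter over the raw response is easy to write. The sketch below assumes the OpenAI-compatible response shape shown above; `toSdkShape` is an illustrative helper name, not part of the SDK:

```js
// Hypothetical adapter: map the raw OpenAI-style response to the
// { text, usage } shape the JS/Node SDK returns. Not an SDK API.
function toSdkShape(res) {
  return {
    text: res.choices?.[0]?.message?.content ?? "",
    usage: res.usage, // { prompt_tokens, completion_tokens, total_tokens }
  };
}
```

With the `generateText` result above you would then write `const { text, usage } = toSdkShape(res);`.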
⚠️ Different from the JS/Node SDK: parameters must be wrapped in a `data` object, and callbacks are supported.
```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// ⚠️ Parameters MUST be wrapped in a `data` object
const res = await model.streamText({
  data: { // ⚠️ required wrapper
    model: "hunyuan-2.0-instruct-20251111", // recommended model
    messages: [{ role: "user", content: "hi" }]
  },
  onText: (text) => { // optional: incremental text callback
    console.log("New text:", text);
  },
  onEvent: ({ data }) => { // optional: raw event callback
    console.log("Event:", data);
  },
  onFinish: (fullText) => { // optional: completion callback
    console.log("Done:", fullText);
  }
});

// Async iteration is also available
for await (const str of res.textStream) {
  console.log(str);
}

// Check for completion with eventStream
for await (const event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") { // ⚠️ check for [DONE] to stop
    break;
  }
}
```
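To render streamed output into a page, a common pattern is to accumulate the chunks and hand each partial result to the UI. The sketch below assumes `textStream` yields plain string chunks, as shown above; `collectStream` and `onUpdate` are our own names, not SDK API:

```js
// Sketch: drain an AsyncIterable<string> such as res.textStream,
// calling onUpdate with the accumulated text after every chunk.
// (collectStream/onUpdate are illustrative names, not SDK API.)
async function collectStream(textStream, onUpdate) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
    onUpdate(full); // partial text so far
  }
  return full;
}
```

In a Page you might call `collectStream(res.textStream, (t) => this.setData({ answer: t }))` so the answer re-renders as it streams in.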
| Feature | JS/Node SDK | WeChat Mini Program |
|---|---|---|
| Namespace | app.ai() | wx.cloud.extend.AI |
| generateText params | Direct object | Direct object |
| generateText return | { text, usage, messages } | Raw: { choices, usage } |
| streamText params | Direct object | ⚠️ Wrapped in data: {...} |
| streamText return | { textStream, dataStream } | { textStream, eventStream } |
| Callbacks | Not supported | onText, onEvent, onFinish |
| Image generation | Node SDK only | Not available |
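If you share prompt-building code between web/Node and Mini Program targets, the parameter-wrapping difference in the table can be hidden behind a small shim. This is a sketch under our own naming (`streamTextCompat`, `isMiniProgram`); callbacks are omitted for brevity:

```js
// Sketch: normalize the streamText parameter difference between the
// JS/Node SDK (direct object) and the Mini Program SDK (`data` wrapper).
// streamTextCompat and isMiniProgram are illustrative names, not SDK API.
function streamTextCompat(model, params, isMiniProgram) {
  return isMiniProgram
    ? model.streamText({ data: params }) // Mini Program: wrap in `data`
    : model.streamText(params);          // JS/Node SDK: pass directly
}
```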
```ts
interface WxStreamTextInput {
  data: { // ⚠️ required wrapper object
    model: string;
    messages: Array<{
      role: "user" | "system" | "assistant";
      content: string;
    }>;
  };
  onText?: (text: string) => void;            // incremental text callback
  onEvent?: (prop: { data: string }) => void; // raw event callback
  onFinish?: (text: string) => void;          // completion callback
}

interface WxStreamTextResult {
  textStream: AsyncIterable<string>; // incremental text stream
  eventStream: AsyncIterable<{       // raw event stream
    event?: unknown;
    id?: unknown;
    data: string; // "[DONE]" when complete
  }>;
}

// Raw model response (OpenAI-compatible format)
interface WxGenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```
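Given the `WxStreamTextResult` typing above, a defensive `eventStream` consumer can stop at the `"[DONE]"` sentinel and decode the rest. Note an assumption here: the typings only guarantee `data: string`, so treating non-sentinel payloads as JSON (OpenAI-style chunk objects) is a guess, and the parse is wrapped accordingly. `drainEvents` is our own name:

```js
// Sketch: drain an eventStream-shaped AsyncIterable, stopping at "[DONE]"
// and collecting events whose data parses as JSON.
// (JSON payloads are an assumption; the typings only promise a string.)
async function drainEvents(eventStream) {
  const events = [];
  for await (const event of eventStream) {
    if (event.data === "[DONE]") break; // terminal sentinel
    try {
      events.push(JSON.parse(event.data));
    } catch {
      events.push({ raw: event.data }); // keep non-JSON payloads as-is
    }
  }
  return events;
}
```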
Tips:

- `onText` is great for real-time display.
- With `eventStream`, check `event.data === "[DONE]"` to stop.
- Remember the `data` wrapper: `streamText` params must be wrapped in `data: {...}`.