openai-assistants by jezweb/claude-skills
npx skills add https://github.com/jezweb/claude-skills --skill openai-assistants
Status : Production Ready (⚠️ Deprecated - Sunset August 26, 2026)
Package : openai@6.16.0
Last Updated : 2026-01-21
v1 Deprecated : December 18, 2024
v2 Sunset : August 26, 2026 (migrate to Responses API)
OpenAI is deprecating the Assistants API in favor of the Responses API.
Timeline : v1 deprecated Dec 18, 2024 | v2 sunset August 26, 2026
Use this skill if : Maintaining legacy apps or migrating existing code (12-18 month window)
Don't use if : Starting new projects (use openai-responses skill instead)
Migration : See references/migration-to-responses.md
npm install openai@6.16.0
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// 1. Create assistant
const assistant = await openai.beta.assistants.create({
name: "Math Tutor",
instructions: "You are a math tutor. Use code interpreter for calculations.",
tools: [{ type: "code_interpreter" }],
model: "gpt-5",
});
// 2. Create thread
const thread = await openai.beta.threads.create();
// 3. Add message
await openai.beta.threads.messages.create(thread.id, {
role: "user",
content: "Solve: 3x + 11 = 14",
});
// 4. Run assistant
const run = await openai.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id,
});
// 5. Poll for completion
let status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
while (['queued', 'in_progress'].includes(status.status)) {
await new Promise(r => setTimeout(r, 1000));
status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
}
// exits on any terminal state ('completed', 'failed', 'cancelled', 'expired') instead of spinning forever
// 6. Get response
const messages = await openai.beta.threads.messages.list(thread.id);
console.log(messages.data[0].content[0].text.value);
Four Main Objects:
const assistant = await openai.beta.assistants.create({
model: "gpt-5",
instructions: "System prompt (max 256k chars in v2)",
tools: [{ type: "code_interpreter" }, { type: "file_search" }],
tool_resources: { file_search: { vector_store_ids: ["vs_123"] } },
});
Key Limits : 256k instruction chars (v2), 128 tools max, 16 metadata pairs
// Create thread with messages
const thread = await openai.beta.threads.create({
messages: [{ role: "user", content: "Hello" }],
});
// Add message with attachments
await openai.beta.threads.messages.create(thread.id, {
role: "user",
content: "Analyze this",
attachments: [{ file_id: "file_123", tools: [{ type: "code_interpreter" }] }],
});
// List messages
const msgs = await openai.beta.threads.messages.list(thread.id);
Key Limits : 100k messages per thread
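With up to 100k messages per thread, a single list() call will not return everything. The openai-node SDK's list() results are async-iterable across pages; a small capped-drain helper keeps memory bounded (`takeItems` is an illustrative name, not part of the SDK):

```javascript
// Illustrative helper (not part of the SDK): drain an async iterable, capped at `max` items.
async function takeItems(iter, max) {
  const out = [];
  for await (const item of iter) {
    out.push(item);
    if (out.length >= max) break;
  }
  return out;
}

// Usage sketch, assuming an `openai` client and `thread` in scope -
// the SDK's list() pages transparently when iterated with `for await`:
// const firstHundred = await takeItems(openai.beta.threads.messages.list(thread.id), 100);
```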
// Create run with optional overrides
const run = await openai.beta.threads.runs.create(thread.id, {
assistant_id: "asst_123",
additional_messages: [{ role: "user", content: "Question" }],
max_prompt_tokens: 1000,
max_completion_tokens: 500,
});
// Poll until complete
let status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
while (['queued', 'in_progress'].includes(status.status)) {
await new Promise(r => setTimeout(r, 1000));
status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
}
Run States : queued → in_progress → requires_action (function calling) / completed / failed / cancelled / expired (10 min max)
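The state diagram above can be collapsed into a small dispatcher. This is a sketch with illustrative outcome names, not SDK code:

```javascript
// Terminal states per the run lifecycle above ('incomplete' also occurs when token limits are hit).
const TERMINAL = new Set(['completed', 'failed', 'cancelled', 'expired', 'incomplete']);

// Map a run status to the next app-level action (outcome names are illustrative).
function classifyRun(status) {
  if (status === 'requires_action') return 'submit_tool_outputs';
  if (!TERMINAL.has(status)) return 'keep_polling'; // queued, in_progress, cancelling
  return status === 'completed' ? 'read_messages' : 'report_error';
}
```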
const stream = await openai.beta.threads.runs.stream(thread.id, { assistant_id });
for await (const event of stream) {
if (event.event === 'thread.message.delta') {
process.stdout.write(event.data.delta.content?.[0]?.text?.value || '');
}
}
Key Events : thread.run.created, thread.message.delta (streaming content), thread.run.step.delta (tool progress), thread.run.completed, thread.run.requires_action (function calling)
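These events lend themselves to a pure reducer that the streaming loop calls once per event; the delta access below mirrors the pattern shown above. A sketch, not an exhaustive handler:

```javascript
// Fold one stream event into accumulated state (handles only the key events listed above).
function applyEvent(state, event) {
  switch (event.event) {
    case 'thread.message.delta':
      return { ...state, text: state.text + (event.data.delta.content?.[0]?.text?.value ?? '') };
    case 'thread.run.requires_action':
      return { ...state, pendingToolCalls: event.data.required_action.submit_tool_outputs.tool_calls };
    case 'thread.run.completed':
      return { ...state, done: true };
    default:
      return state; // ignore thread.run.created, thread.run.step.delta, etc.
  }
}

// let state = { text: '', done: false };
// for await (const event of stream) state = applyEvent(state, event);
```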
Runs Python code in sandbox. Generates charts, processes files (CSV, JSON, PDF, images). Max 512MB per file.
// Attach file to message
attachments: [{ file_id: "file_123", tools: [{ type: "code_interpreter" }] }]
// Access generated files
for (const content of message.content) {
if (content.type === 'image_file') {
const fileContent = await openai.files.content(content.image_file.file_id);
}
}
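To persist generated images, the fileContent above (a fetch-style Response in the official SDK) can be buffered and written out. The helper below is illustrative: the client and a writeFile function (e.g. fs.promises.writeFile) are passed in as parameters:

```javascript
// Illustrative helper: save every image_file part of a message to disk.
// `client` is an OpenAI client; `writeFile(path, buffer)` is injected, e.g. fs.promises.writeFile.
async function saveGeneratedImages(client, message, writeFile, dir = '.') {
  const saved = [];
  for (const part of message.content) {
    if (part.type !== 'image_file') continue;
    const res = await client.files.content(part.image_file.file_id); // fetch-style Response
    const buf = Buffer.from(await res.arrayBuffer());
    const path = `${dir}/${part.image_file.file_id}.png`;
    await writeFile(path, buf);
    saved.push(path);
  }
  return saved;
}
```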
Semantic search with vector stores. 10,000 files max (v2, was 20 in v1). Pricing : $0.10/GB/day (1GB free).
// Create vector store
const vs = await openai.beta.vectorStores.create({ name: "Docs" });
await openai.beta.vectorStores.files.create(vs.id, { file_id: "file_123" });
// Wait for indexing
let store = await openai.beta.vectorStores.retrieve(vs.id);
while (store.status === 'in_progress') {
await new Promise(r => setTimeout(r, 2000));
store = await openai.beta.vectorStores.retrieve(vs.id);
}
// Use in assistant
tool_resources: { file_search: { vector_store_ids: [vs.id] } }
⚠️ Wait for status: 'completed' before using
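The polling loop above has no upper bound; a deadline avoids hanging on a store that never finishes indexing. A sketch (helper name and defaults are illustrative; the client is a parameter so it can be mocked):

```javascript
// Illustrative helper: poll until the vector store leaves 'in_progress', with a timeout.
async function waitForVectorStore(client, vectorStoreId, { intervalMs = 2000, timeoutMs = 120000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const store = await client.beta.vectorStores.retrieve(vectorStoreId);
    if (store.status !== 'in_progress') return store; // caller should still check for 'completed'
    if (Date.now() > deadline) throw new Error(`vector store ${vectorStoreId} not indexed in time`);
    await new Promise(r => setTimeout(r, intervalMs));
  }
}
```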
Submit tool outputs when run.status === 'requires_action':
if (run.status === 'requires_action') {
const toolCalls = run.required_action.submit_tool_outputs.tool_calls;
const outputs = toolCalls.map(tc => ({
tool_call_id: tc.id,
output: JSON.stringify(yourFunction(JSON.parse(tc.function.arguments))),
}));
run = await openai.beta.threads.runs.submitToolOutputs(thread.id, run.id, {
tool_outputs: outputs,
});
}
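The snippet above submits outputs but assumes the function tool was already declared on the assistant. Function tools take JSON Schema parameters; a small builder keeps the shape right (builder and `get_weather` are illustrative, not part of the SDK):

```javascript
// Illustrative builder for a function-tool definition (JSON Schema `parameters`).
function makeFunctionTool(name, description, properties, required) {
  return {
    type: "function",
    function: { name, description, parameters: { type: "object", properties, required } },
  };
}

// Hypothetical tool - pass via openai.beta.assistants.create({ model, tools: [weatherTool] })
const weatherTool = makeFunctionTool(
  "get_weather",
  "Get current weather for a city",
  { city: { type: "string" } },
  ["city"],
);
```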
Code Interpreter : .c, .cpp, .csv, .docx, .html, .java, .json, .md, .pdf, .php, .pptx, .py, .rb, .tex, .txt, .css, .jpeg, .jpg, .js, .gif, .png, .tar, .ts, .xlsx, .xml, .zip (512MB max)
File Search : .c, .cpp, .docx, .html, .java, .json, .md, .pdf, .php, .pptx, .py, .rb, .tex, .txt, .css, .js, .ts, .go (512MB max)
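Since unsupported formats can fail silently, a pre-upload check against the lists above is cheap insurance. A sketch for the file_search list (adapt the set for code interpreter):

```javascript
// Extensions supported by file_search, copied from the list above.
const FILE_SEARCH_EXTS = new Set([
  '.c', '.cpp', '.docx', '.html', '.java', '.json', '.md', '.pdf', '.php',
  '.pptx', '.py', '.rb', '.tex', '.txt', '.css', '.js', '.ts', '.go',
]);

// Return true if the filename's extension is accepted by file_search.
function isFileSearchSupported(filename) {
  const dot = filename.lastIndexOf('.');
  return dot > 0 && FILE_SEARCH_EXTS.has(filename.slice(dot).toLowerCase());
}
```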
1. Thread Already Has Active Run
Error: 400 Can't add messages to thread_xxx while a run run_xxx is active.
Fix : Cancel active run first: await openai.beta.threads.runs.cancel(threadId, runId)
2. Run Polling Timeout / Incomplete Status
Error: OpenAIError: Final run has not been received
Why It Happens : Long-running tasks may exceed polling windows or finish with incomplete status
Prevention : Handle incomplete runs gracefully
try {
const stream = await openai.beta.threads.runs.stream(thread.id, { assistant_id });
for await (const event of stream) {
if (event.event === 'thread.message.delta') {
process.stdout.write(event.data.delta.content?.[0]?.text?.value || '');
}
}
} catch (error) {
if (error.message?.includes('Final run has not been received')) {
// Run ended with 'incomplete' status - thread can continue
const run = await openai.beta.threads.runs.retrieve(thread.id, runId);
if (run.status === 'incomplete') {
// Handle: prompt user to continue, reduce max_completion_tokens, etc.
}
}
}
Source : GitHub Issues #945, #1306, #1439
3. Vector Store Not Ready
Using the vector store before indexing completes.
Fix : Poll vectorStores.retrieve() until status === 'completed' (see File Search section)
4. File Upload Format Issues
Unsupported file formats cause silent failures.
Fix : Validate file extensions before upload (see File Formats section)
5. Vector Store Upload Documentation Incorrect
Error: No 'files' provided to process
Why It Happens : Official documentation shows incorrect usage of uploadAndPoll
Prevention : Wrap file streams in { files: [...] } object
// ✅ Correct
await openai.beta.vectorStores.fileBatches.uploadAndPoll(vectorStoreId, {
files: fileStreams
});
// ❌ Wrong (shown in official docs)
await openai.beta.vectorStores.fileBatches.uploadAndPoll(vectorStoreId, fileStreams);
Source : GitHub Issue #1337
6. Reasoning Models Reject Temperature Parameter
Error: Unsupported parameter: 'temperature' is not supported with this model
Why It Happens : When updating assistant to o3-mini/o1-preview/o1-mini, old temperature settings persist
Prevention : Explicitly set temperature to null
await openai.beta.assistants.update(assistantId, {
model: 'o3-mini',
reasoning_effort: 'medium',
temperature: null, // ✅ Must explicitly clear
top_p: null
});
Source : GitHub Issue #1318
7. uploadAndPoll Returns Vector Store ID Instead of Batch ID
Error: Invalid 'batch_id': 'vs_...'. Expected an ID that begins with 'vsfb_'.
Why It Happens : uploadAndPoll returns vector store object instead of batch object
Prevention : Use alternative methods to get batch ID
// Option 1: Use createAndPoll after separate upload
const batch = await openai.vectorStores.fileBatches.createAndPoll(
vectorStoreId,
{ file_ids: uploadedFileIds }
);
// Option 2: List batches to find correct ID
const batches = await openai.vectorStores.fileBatches.list(vectorStoreId);
const batchId = batches.data[0].id; // starts with 'vsfb_'
Source : GitHub Issue #1700
8. Vector Store File Delete Affects All Stores
Warning : Deleting a file from one vector store removes it from ALL vector stores
// ❌ This deletes file from VS_A, VS_B, AND VS_C
await openai.vectorStores.files.delete('VS_A', 'file-xxx');
Why It Happens : SDK or API bug - delete operation has global effect
Prevention : Avoid sharing files across multiple vector stores if selective deletion is needed
Source : GitHub Issue #1710
9. Memory Leak in Large File Uploads (Community-sourced)
Source : GitHub Issue #1052 | Status : OPEN
Impact : ~44MB leaked per 22MB file upload in long-running servers
Why It Happens : When uploading large files from streams (S3, etc.) using vectorStores.fileBatches.uploadAndPoll, memory may not be released after upload completes
Verified : Maintainer acknowledged, reduced in v4.58.1 but not eliminated
Workaround : Monitor memory usage in long-lived servers; restart periodically or use separate worker processes
10. Thread Already Has Active Run - Race Condition (Community-sourced)
Enhancement to Issue #1 : When canceling an active run, race conditions may occur if the run completes before cancellation
async function createRunSafely(threadId: string, assistantId: string) {
// Check for active runs first
const runs = await openai.beta.threads.runs.list(threadId, { limit: 1 });
const activeRun = runs.data.find(r =>
['queued', 'in_progress', 'requires_action'].includes(r.status)
);
if (activeRun) {
try {
await openai.beta.threads.runs.cancel(threadId, activeRun.id);
// Wait for cancellation to complete
let run = await openai.beta.threads.runs.retrieve(threadId, activeRun.id);
while (run.status === 'cancelling') {
await new Promise(r => setTimeout(r, 500));
run = await openai.beta.threads.runs.retrieve(threadId, activeRun.id);
}
} catch (error) {
// Ignore "already completed" errors - run finished naturally
if (!error.message?.includes('completed')) throw error;
}
}
return openai.beta.threads.runs.create(threadId, { assistant_id: assistantId });
}
Source : OpenAI Community Forum
See references/top-errors.md for complete catalog.
openai-api (Chat Completions): Stateless, manual history, direct responses. Use for simple generation.
openai-responses (Responses API): ✅ Recommended for new projects. Better reasoning, modern MCP integration, active development.
openai-assistants : ⚠️ Deprecated, sunset August 26, 2026. Use for legacy apps only. Migration: references/migration-to-responses.md
v1 deprecated : Dec 18, 2024
Key Changes : retrieval → file_search, vector stores (10k files vs 20), 256k instructions (vs 32k), message-level file attachments
See references/migration-from-v1.md
Templates : templates/basic-assistant.ts, code-interpreter-assistant.ts, file-search-assistant.ts, function-calling-assistant.ts, streaming-assistant.ts
References : references/top-errors.md, thread-lifecycle.md, vector-stores.md, migration-to-responses.md, migration-from-v1.md
Related Skills : openai-responses (recommended), openai-api
Last Updated : 2026-01-21
Package : openai@6.16.0
Status : Production Ready (⚠️ Deprecated - Sunset August 26, 2026)
Changes : Added 6 new known issues (vector store upload bugs, o3-mini temperature, memory leak), enhanced streaming error handling