deepseek by vm0-ai/vm0-skills

npx skills add https://github.com/vm0-ai/vm0-skills --skill deepseek
Use the DeepSeek API via direct curl calls to access powerful AI language models for chat, reasoning, and code generation.
Official docs:
https://api-docs.deepseek.com/
Use this skill when you need chat completions, reasoning, or code generation via direct curl calls. First, set your API key:
export DEEPSEEK_API_KEY="your-api-key"
| Type | Price (per 1M tokens) |
|---|---|
| Input (cache hit) | $0.028 |
| Input (cache miss) | $0.28 |
| Output | $0.42 |
DeepSeek does not enforce strict rate limits and will try to serve every request. During high traffic, connections are kept open with keep-alive signals.
All examples below assume you have DEEPSEEK_API_KEY set.
The base URL for the DeepSeek API is:
https://api.deepseek.com (recommended)
https://api.deepseek.com/v1 (OpenAI-compatible)

Send a simple chat message:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello, who are you?"
}
]
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json
Available models:
deepseek-chat: DeepSeek-V3.2 non-thinking mode (128K context, 8K max output)
deepseek-reasoner: DeepSeek-V3.2 thinking mode (128K context, 64K max output)

Adjust creativity/randomness with temperature:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [
{
"role": "user",
"content": "Write a short poem about coding."
}
],
"temperature": 0.7,
"max_tokens": 200
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json | jq -r '.choices[0].message.content'
Parameters:
temperature (0-2, default 1): Higher = more creative, lower = more deterministic
top_p (0-1, default 1): Nucleus sampling threshold
max_tokens: Maximum tokens to generate

Get real-time token-by-token output:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [
{
"role": "user",
"content": "Explain quantum computing in simple terms."
}
],
"stream": true
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json
Streaming returns Server-Sent Events (SSE) with delta chunks, ending with data: [DONE].
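The stream can be reduced to plain text on the fly by stripping the SSE framing and joining the delta chunks. A minimal sketch, where the sample events are hand-written illustrations of the chunk shape, not real API output:

```shell
# Reassemble streamed text from SSE "data:" lines.
# Each chunk carries a fragment in .choices[0].delta.content;
# the stream terminates with the literal line "data: [DONE]".
parse_sse() {
  grep '^data: ' \
    | sed 's/^data: //' \
    | grep -v '^\[DONE\]$' \
    | jq -rj '.choices[0].delta.content // empty'
}

# Illustrative sample of what the stream looks like:
printf '%s\n' \
  'data: {"choices":[{"delta":{"content":"Hello"}}]}' \
  'data: {"choices":[{"delta":{"content":" world"}}]}' \
  'data: [DONE]' | parse_sse
```

In practice you would pipe the curl command above into parse_sse instead of the printf sample.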
Use the reasoner model for complex reasoning tasks:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-reasoner",
"messages": [
{
"role": "user",
"content": "What is 15 * 17? Show your work."
}
]
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json | jq -r '.choices[0].message.content'
The reasoner model excels at math, logic, and multi-step problems.
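Per the DeepSeek docs, the reasoner's message also carries a reasoning_content field with the chain of thought alongside the final answer in content. A sketch of extracting both, using a hand-written sample response (not real API output):

```shell
# Split a deepseek-reasoner response into its reasoning trace and
# final answer. The sample JSON only illustrates the response shape.
sample='{"choices":[{"message":{"reasoning_content":"15*17 = 15*(20-3) = 300-45 = 255","content":"15 * 17 = 255"}}]}'

echo "$sample" | jq -r '.choices[0].message.reasoning_content'
echo "$sample" | jq -r '.choices[0].message.content'
```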
Force the model to return valid JSON:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [
{
"role": "system",
"content": "You are a JSON generator. Always respond with valid JSON."
},
{
"role": "user",
"content": "List 3 programming languages with their main use cases."
}
],
"response_format": {
"type": "json_object"
}
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json | jq -r '.choices[0].message.content'
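The extracted content can be validated locally before use; jq exits nonzero on malformed input, so it doubles as a cheap JSON check. A sketch with an illustrative model reply:

```shell
# Check that a model reply is well-formed JSON before consuming it.
# The sample reply stands in for the jq-extracted message content.
reply='{"languages": [{"name": "Python", "use": "data science"}]}'

if echo "$reply" | jq -e . >/dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```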
Continue a conversation with message history:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [
{
"role": "user",
"content": "My name is Alice."
},
{
"role": "assistant",
"content": "Nice to meet you, Alice."
},
{
"role": "user",
"content": "What is my name?"
}
]
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json | jq -r '.choices[0].message.content'
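Each turn, the assistant's reply and the next user message get appended to the saved request body before resending. A sketch using jq; the helper name and file paths are just examples:

```shell
# Append the assistant message from the last response, plus a new user
# message, to the messages array of a saved request body.
#   $1 = request JSON file, $2 = response JSON file, $3 = next user message
next_request() {
  jq --slurpfile resp "$2" --arg msg "$3" \
     '.messages += [$resp[0].choices[0].message, {role: "user", content: $msg}]' \
     "$1"
}
```

Writing the result back to /tmp/deepseek_request.json lets you repeat the same curl command for every turn.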
Use Fill-in-the-Middle for code completion (beta endpoint):
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"prompt": "def add(a, b):\n ",
"max_tokens": 20
}
Then run:
curl -s "https://api.deepseek.com/beta/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json | jq -r '.choices[0].text'
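The request above is a plain prefix completion. For true fill-in-the-middle, the beta endpoint also accepts a suffix field, and the model generates only the text between the two. A hedged sketch of such a request body, based on the DeepSeek FIM docs:

```json
{
  "model": "deepseek-chat",
  "prompt": "def fib(n):\n    if n < 2:\n        return n\n    return ",
  "suffix": "\n\nprint(fib(10))",
  "max_tokens": 32
}
```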
FIM is useful for completing code where both the surrounding prefix and suffix are known, such as filling in a function body or a missing expression.
Define functions the model can call:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [
{
"role": "user",
"content": "What is the weather in Tokyo?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city name"
}
},
"required": ["location"]
}
}
}
]
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json
The model will return a tool_calls array when it wants to use a function.
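Note that the arguments field arrives as a JSON-encoded string, so it needs a second decoding pass. A sketch over an illustrative response (not real API output):

```shell
# Extract the function name and decoded arguments from a tool_calls
# response. The sample JSON mimics the response shape.
sample='{"choices":[{"message":{"tool_calls":[{"id":"call_1","type":"function","function":{"name":"get_weather","arguments":"{\"location\": \"Tokyo\"}"}}]}}]}'

echo "$sample" | jq -r '.choices[0].message.tool_calls[0].function.name'
# arguments is a string of JSON; decode it with fromjson:
echo "$sample" | jq -r '.choices[0].message.tool_calls[0].function.arguments | fromjson | .location'
```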
Extract usage information from response:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [
{
"role": "user",
"content": "Hello"
}
]
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json | jq '.usage'
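The usage counts can be turned into a cost estimate with the pricing table above (rates per 1M tokens). A sketch that assumes every input token was a cache miss; the usage object is an illustrative sample:

```shell
# Estimate request cost in USD from the usage object, assuming all
# input tokens were cache misses ($0.28/1M in, $0.42/1M out).
usage='{"prompt_tokens":12,"completion_tokens":34,"total_tokens":46}'

echo "$usage" | jq '(.prompt_tokens * 0.28 + .completion_tokens * 0.42) / 1e6'
```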
Response includes:
prompt_tokens: Input token count
completion_tokens: Output token count
total_tokens: Sum of both

DeepSeek is fully compatible with OpenAI SDKs. Just change the base URL:
Python:
from openai import OpenAI
client = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")
Node.js:
import OpenAI from 'openai';
const client = new OpenAI({ apiKey: 'your-deepseek-key', baseURL: 'https://api.deepseek.com' });
For complex requests with nested JSON (like function calling), use a temp file to avoid shell escaping issues:
Write to /tmp/deepseek_request.json:
{
"model": "deepseek-chat",
"messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
"tools": [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather",
"parameters": {
"type": "object",
"properties": {"location": {"type": "string"}},
"required": ["location"]
}
}
}]
}
Then run:
curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $(printenv DEEPSEEK_API_KEY)" -d @/tmp/deepseek_request.json
Tips:
Use deepseek-chat for general tasks, deepseek-reasoner for complex reasoning.
When using response_format, also include JSON instructions in the system message.
FIM completion uses the beta base URL api.deepseek.com/beta.
Pass request bodies with -d @filename to avoid shell quoting issues.

Weekly Installs
124
Repository
GitHub Stars
48
First Seen
Jan 24, 2026
Security Audits
Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Pass
Installed on
gemini-cli: 109
opencode: 108
codex: 107
github-copilot: 106
amp: 102
cursor: 101