The Agent Skills Directory
npx skills add https://smithery.ai/skills/pacphi/blender-3d

This skill enables Claude to interact with Blender for 3D modeling, scene manipulation, material application, and rendering through the BlenderMCP addon (ahujasid/blender-mcp, 14k+ stars).
When building with Dockerfile.unified, the addon is installed automatically to ~/.config/blender/{4.0,4.1,4.2}/scripts/addons/. To install manually, copy addon/blender_mcp_addon.py from this skill directory:
# Linux
cp addon/blender_mcp_addon.py ~/.config/blender/4.0/scripts/addons/
# macOS
cp addon/blender_mcp_addon.py ~/Library/Application\ Support/Blender/4.0/scripts/addons/
# Windows
copy addon\blender_mcp_addon.py "%APPDATA%\Blender Foundation\Blender\4.0\scripts\addons"
# Or use the install helper script from this skill
python3 addon/install-addon.py --blender-version 4.0
# Test that Blender is listening
nc -zv localhost 9876
# Or send a test command
echo '{"type":"get_scene_info","params":{}}' | nc localhost 9876
# Start the MCP server
uvx blender-mcp
# Or run the module directly
python3 -m blender_mcp.server
Add to claude_desktop_config.json:
{
  "mcpServers": {
    "blender": {
      "command": "uvx",
      "args": ["blender-mcp"]
    }
  }
}
get_scene_info: Get detailed information about the current Blender scene.

get_object_info: Get detailed information about a specific object.
- object_name (required): Name of the object

execute_code: Execute arbitrary Python code in Blender. Note the tool is named execute_code, not execute_blender_code.
- code (required): Python code to execute

get_viewport_screenshot: Capture a screenshot of the current 3D viewport.
- max_size (optional): Maximum size in pixels (default: 800)

get_polyhaven_status: Check if PolyHaven integration is enabled.

get_polyhaven_categories: Get available asset categories.
- asset_type: "hdris" | "textures" | "models" | "all"

search_polyhaven_assets: Search for PolyHaven assets.
- asset_type: Type of assets to search
- categories: Optional comma-separated category filter

download_polyhaven_asset: Download and import a PolyHaven asset.
- asset_id (required): Asset identifier
- asset_type (required): "hdris" | "textures" | "models"
- resolution (optional): "1k" | "2k" | "4k" (default: "1k")
- file_format (optional): File format preference

set_texture: Apply a downloaded texture to an object.
- object_name (required): Target object
- texture_id (required): PolyHaven texture ID

get_sketchfab_status: Check if Sketchfab integration is enabled.

search_sketchfab_models: Search Sketchfab for 3D models.
- query (required): Search text
- categories (optional): Category filter
- count (optional): Maximum results (default: 20)
- downloadable (optional): Only downloadable models (default: true)

download_sketchfab_model: Download and import a Sketchfab model.
- uid (required): Sketchfab model UID

get_hyper3d_status: Check if Hyper3D Rodin is enabled.

generate_hyper3d_model_via_text: Generate a 3D model from a text description.
- text_prompt (required): Description in English
- bbox_condition (optional): [length, width, height] ratio

generate_hyper3d_model_via_images: Generate a 3D model from reference images.
- input_image_paths (optional): List of image file paths
- input_image_urls (optional): List of image URLs
- bbox_condition (optional): Size ratio

poll_rodin_job_status: Check generation task status.
- subscription_key: For MAIN_SITE mode
- request_id: For FAL_AI mode

import_generated_asset: Import a completed Hyper3D model.
- name (required): Object name in the scene
- task_uuid: For MAIN_SITE mode
- request_id: For FAL_AI mode

get_hunyuan3d_status: Check if Hunyuan3D is enabled.

generate_hunyuan3d_model: Generate a 3D model using Hunyuan3D.
- text_prompt (optional): Text description
- input_image_url (optional): Reference image URL

poll_hunyuan_job_status: Check Hunyuan3D task status.
- job_id (required): Job identifier

import_generated_asset_hunyuan: Import a completed Hunyuan3D model.
- name (required): Object name
- zip_file_url (required): Generated model URL

Use the Blender skill to:
1. Check PolyHaven status
2. Download an HDRI for environment lighting
3. Download a wood texture
4. Create a cube and apply the wood texture
5. Render the scene
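Each numbered step above maps to one tool call. A minimal, hedged sketch of the same workflow as raw socket commands against the addon's port. The asset IDs here are placeholders (not verified PolyHaven IDs), and it assumes the addon accepts tool names in the `type` field, as the pipeline examples later in this document do:

```python
import json
import socket

def blender_command(cmd_type, params=None, host="localhost", port=9876):
    """Send one command to the BlenderMCP addon socket and return the JSON reply."""
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps({"type": cmd_type, "params": params or {}}).encode())
        return json.loads(s.recv(65536).decode())

# The five steps from the prompt above, expressed as commands.
# Asset IDs are illustrative placeholders, not real catalog entries.
steps = [
    ("get_polyhaven_status", {}),
    ("download_polyhaven_asset", {"asset_id": "studio_small_03", "asset_type": "hdris"}),
    ("download_polyhaven_asset", {"asset_id": "wood_planks", "asset_type": "textures"}),
    ("execute_code", {"code": "import bpy; bpy.ops.mesh.primitive_cube_add()"}),
    ("set_texture", {"object_name": "Cube", "texture_id": "wood_planks"}),
]

# To run against a live Blender instance:
# for cmd, params in steps:
#     print(blender_command(cmd, params))
```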
Search Sketchfab for "vintage car" and import the first downloadable result.
Scale it to fit the scene and position at origin.
Generate a 3D model of "a small wooden treasure chest with gold trim"
using Hyper3D Rodin. Import it and add environment lighting.
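The Hyper3D flow is asynchronous: generate, poll until the job finishes, then import. A hedged sketch of that loop using the tool names from the reference above; the response fields (request_id, the "COMPLETED" status value) are assumptions about the FAL_AI mode payload, not documented behavior:

```python
import json
import socket
import time

def call(cmd_type, params, host="localhost", port=9876):
    """Send one command to the BlenderMCP socket and decode the JSON reply."""
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps({"type": cmd_type, "params": params}).encode())
        return json.loads(s.recv(65536).decode())

def generate_and_import(prompt, name, poll_every=10, timeout=600):
    """Generate a model with Hyper3D Rodin, poll, then import (FAL_AI mode)."""
    job = call("generate_hyper3d_model_via_text", {"text_prompt": prompt})
    request_id = job.get("request_id")  # assumed response field
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call("poll_rodin_job_status", {"request_id": request_id})
        if status.get("status") == "COMPLETED":  # assumed status value
            return call("import_generated_asset",
                        {"name": name, "request_id": request_id})
        time.sleep(poll_every)
    raise TimeoutError(f"Rodin job {request_id} did not finish in {timeout}s")
```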
# Create a grid of cubes with random colors
import bpy
import random

for x in range(-5, 6):
    for y in range(-5, 6):
        bpy.ops.mesh.primitive_cube_add(location=(x*2, y*2, 0))
        obj = bpy.context.active_object
        mat = bpy.data.materials.new(name=f"Mat_{x}_{y}")
        mat.diffuse_color = (random.random(), random.random(), random.random(), 1)
        obj.data.materials.append(mat)
For automated pipelines, use the standalone server instead of the UI addon:
# Start with a VNC display (keeps the Blender window open)
DISPLAY=:1 blender --python scripts/standalone_server.py
# Or drive it with a direct Python call
python3 -c "
import socket
import json
s = socket.socket()
s.connect(('localhost', 9876))
# Import a GLB mesh
request = json.dumps({
'type': 'import_model',
'params': {'filepath': '/path/to/mesh.glb', 'name': 'MyModel'}
})
s.sendall(request.encode())
print(s.recv(4096).decode())
s.close()
"
| Command | Description |
|---|---|
| get_scene_info | Get scene details and object list |
| get_object_info | Get specific object properties |
| execute_blender_code | Run arbitrary Python code |
| import_model | Import GLB/OBJ/FBX/STL/PLY files |
| render | Render to an image file |
| orbit_render | Render from multiple orbit angles |
| set_camera | Position the camera at a location/target |
| add_hdri | Add HDRI environment lighting |
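All of these commands share the same JSON envelope, so a small wrapper keeps pipeline scripts readable. A sketch under that assumption (the render parameter names in the usage comments are illustrative, not a documented schema):

```python
import json
import socket

def send_command(cmd_type, params=None, host="localhost", port=9876, bufsize=65536):
    """Send one {"type": ..., "params": ...} command to the standalone server
    and return the decoded JSON reply."""
    request = json.dumps({"type": cmd_type, "params": params or {}})
    with socket.create_connection((host, port)) as s:
        s.sendall(request.encode())
        return json.loads(s.recv(bufsize).decode())

# Example usage (parameter names here are illustrative):
# send_command("import_model", {"filepath": "/path/to/mesh.glb", "name": "MyModel"})
# send_command("render", {"output_path": "/tmp/render.png"})
```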
Works well with:
- comfyui skill for text-to-3D model generation and validation
- filesystem skill for managing output files
- imagemagick skill for post-processing renders

The Blender skill is the final validation step in the ComfyUI text-to-3D pipeline. Use the ComfyUI /free endpoint to release GPU memory between phases. Critical: FLUX2 and SAM3D cannot run concurrently on most GPUs, so use a split workflow:
# After FLUX2 generation, free GPU memory before SAM3D
curl -X POST http://comfyui:8188/free \
-H "Content-Type: application/json" \
-d '{"unload_models": true, "free_memory": true}'
import requests
import json
import socket
import time
COMFYUI_URL = "http://192.168.0.51:8188"
# Phase 1: FLUX2 Image Generation
flux2_workflow = {
    "86": {"inputs": {"unet_name": "flux2_dev_fp8mixed.safetensors"}, "class_type": "UNETLoader"},
    # ... rest of FLUX2 workflow
}
response = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": flux2_workflow})
prompt_id = response.json()["prompt_id"]
# Wait for completion
while True:
    history = requests.get(f"{COMFYUI_URL}/history/{prompt_id}").json()
    if history.get(prompt_id, {}).get("status", {}).get("completed"):
        break
    time.sleep(5)
# Free GPU memory
requests.post(f"{COMFYUI_URL}/free", json={"unload_models": True, "free_memory": True})
# Phase 2: SAM3D Reconstruction
sam3d_workflow = {
    "44": {"inputs": {"model_tag": "hf"}, "class_type": "LoadSAM3DModel"},
    # ... rest of SAM3D workflow
}
response = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": sam3d_workflow})
# Phase 3: Blender Import and Validation
s = socket.socket()
s.connect(('localhost', 9876))
s.sendall(json.dumps({
    "type": "import_model",
    "params": {"filepath": "/path/to/mesh.glb", "name": "GeneratedModel"}
}).encode())
print(s.recv(4096).decode())
# Orbit render for validation - render to ComfyUI output for visibility
s.sendall(json.dumps({
    "type": "orbit_render",
    "params": {
        "output_dir": "/root/ComfyUI/output/validation",  # ComfyUI output
        "prefix": "blender_validation",
        "angles": [0, 45, 90, 135, 180, 225, 270, 315],
        "elevation": 30,
        "resolution": 512
    }
}).encode())
print(s.recv(4096).decode())
s.close()
For visibility in the ComfyUI web interface, render directly to the ComfyUI output directory:
# If BlenderMCP runs inside Docker with shared volumes:
output_dir = "/root/ComfyUI/output/validation"

# If Blender runs on the host, copy after rendering:
import subprocess

# Copy from the host into the ComfyUI container
subprocess.run([
    "docker", "cp",
    "/tmp/validation/.",
    "comfyui:/root/ComfyUI/output/validation/"
])
Or use docker exec to copy within the container network:
# Copy renders to the ComfyUI output (from the host)
docker cp /tmp/validation/. comfyui:/root/ComfyUI/output/validation/
# Or, if both run as containers on the same network:
docker exec comfyui mkdir -p /root/ComfyUI/output/validation
docker cp blender:/tmp/renders/. comfyui:/root/ComfyUI/output/validation/
The skill handles:
Recommended priority for creating 3D content:
# Check whether the Blender addon server is running
nc -zv localhost 9876
# Restart Blender and enable the addon
# Look for the BlenderMCP panel in the sidebar (press N)
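The port check can be wrapped in a retry loop for scripts that start Blender and then wait for the addon to come up. A small sketch (the timeout and retry counts are arbitrary choices):

```python
import socket
import time

def wait_for_addon(host="localhost", port=9876, retries=10, delay=2.0):
    """Poll the BlenderMCP port until it accepts a connection, or give up."""
    for attempt in range(1, retries + 1):
        try:
            with socket.create_connection((host, port), timeout=1.0):
                print(f"BlenderMCP is listening on {host}:{port}")
                return True
        except OSError:
            print(f"Attempt {attempt}/{retries}: not listening yet")
            time.sleep(delay)
    return False
```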
For Sketchfab and Hyper3D features, configure API keys in the BlenderMCP addon panel inside Blender.
Ensure VNC is connected for visual feedback:
# Check VNC status
vncserver -list