comfyui-workflow-builder by mckruz/comfyui-expert
npx skills add https://github.com/mckruz/comfyui-expert --skill comfyui-workflow-builder
Translates natural language requests into executable ComfyUI workflow JSON. Always validates against inventory before generating.
Parse the user's intent into a target pattern.
Read state/inventory.json to determine which models and custom nodes are available.
Based on request + inventory, choose from:
| Pattern | When | Key Nodes |
|---|---|---|
| Text-to-Image | Simple generation | Checkpoint → CLIP → KSampler → VAE |
| Identity-Preserved Image | Character consistency | + InstantID/PuLID/IP-Adapter |
| LoRA Character | Trained character | + LoRA Loader |
| Image-to-Video (Wan) | High-quality video | Diffusion Model → Wan I2V → Video Combine |
| Image-to-Video (AnimateDiff) | Fast video, motion control | + AnimateDiff Loader + Motion LoRAs |
| Talking Head | Character speaks | Image → Video → Voice → Lip-Sync |
| Upscale | Enhance resolution | Image → UltimateSDUpscale → Save |
| Inpainting | Edit regions | Image + Mask → Inpaint Model → KSampler |
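The pattern-selection step above can be sketched as a simple keyword lookup. This is a minimal illustration only; the keyword heuristics and pattern identifiers are assumptions, not part of the skill:

```python
# Sketch: map request keywords to a workflow pattern from the table above.
# The keywords and pattern names below are illustrative assumptions.
PATTERNS = {
    "upscale": ["upscale", "resolution", "enhance"],
    "inpainting": ["inpaint", "edit region", "mask"],
    "image_to_video_wan": ["video", "animate"],
    "text_to_image": ["generate", "image", "picture"],
}

def choose_pattern(request: str) -> str:
    request = request.lower()
    for pattern, keywords in PATTERNS.items():
        if any(k in request for k in keywords):
            return pattern
    return "text_to_image"  # fall back to the simplest pattern
```

A real implementation would also consult the inventory, e.g. falling back from Wan to AnimateDiff when the Wan model is missing.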
ComfyUI workflow format:

```json
{
  "{node_id}": {
    "class_type": "{NodeClassName}",
    "inputs": {
      "{param_name}": "{value}",
      "{connected_param}": ["{source_node_id}", {output_index}]
    }
  }
}
```
Rules:
- Connected parameters reference their source as `["source_node_id", output_index]`.

Before presenting to user:
- Verify every `class_type` exists in the inventory's node list.

If online mode: queue via the comfyui-api skill.
If offline mode: save the JSON to projects/{project}/workflows/ with a descriptive name.
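The pre-flight check can be sketched as follows. This is a minimal illustration; the function name and the inventory shape (`{"nodes": [...]}`) are assumptions about how the cached inventory is stored:

```python
# Sketch of pre-flight validation: every class_type must exist in the
# inventory's node list, and every connected input must point at a node
# that actually exists in the workflow.
def validate_workflow(workflow: dict, inventory: dict) -> list[str]:
    errors = []
    known_nodes = set(inventory.get("nodes", []))  # assumed inventory shape
    for node_id, node in workflow.items():
        if node["class_type"] not in known_nodes:
            errors.append(f"node {node_id}: unknown class_type {node['class_type']}")
        for name, value in node.get("inputs", {}).items():
            # Connected params are ["source_node_id", output_index] pairs.
            if isinstance(value, list) and len(value) == 2:
                if value[0] not in workflow:
                    errors.append(
                        f"node {node_id}: input {name} references missing node {value[0]}"
                    )
    return errors
```

An empty list means the workflow is safe to queue or save; otherwise the errors are surfaced to the user before anything runs.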
```json
{
  "1": {
    "class_type": "LoadCheckpoint",
    "inputs": {"ckpt_name": "flux1-dev.safetensors"}
  },
  "2": {
    "class_type": "CLIPTextEncode",
    "inputs": {"text": "{positive_prompt}", "clip": ["1", 1]}
  },
  "3": {
    "class_type": "CLIPTextEncode",
    "inputs": {"text": "{negative_prompt}", "clip": ["1", 1]}
  },
  "4": {
    "class_type": "EmptyLatentImage",
    "inputs": {"width": 1024, "height": 1024, "batch_size": 1}
  },
  "5": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 42,
      "steps": 25,
      "cfg": 3.5,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1.0,
      "model": ["1", 0],
      "positive": ["2", 0],
      "negative": ["3", 0],
      "latent_image": ["4", 0]
    }
  },
  "6": {
    "class_type": "VAEDecode",
    "inputs": {"samples": ["5", 0], "vae": ["1", 2]}
  },
  "7": {
    "class_type": "SaveImage",
    "inputs": {"filename_prefix": "output", "images": ["6", 0]}
  }
}
```
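For offline mode, filling the template's placeholders and saving it under a descriptive name can be sketched like this. The helper name, the slug scheme, and the assumption that node "5" is the KSampler are illustrative, not part of the skill:

```python
import json
import random
import re
from pathlib import Path

def save_workflow(template: dict, project: str, positive: str, negative: str) -> Path:
    """Fill {positive_prompt}/{negative_prompt}, randomize the seed, and save
    to projects/{project}/workflows/ with a descriptive filename (sketch)."""
    # String-level replace is a sketch; assumes prompts contain no
    # JSON-breaking characters such as double quotes.
    wf = json.loads(
        json.dumps(template)
        .replace("{positive_prompt}", positive)
        .replace("{negative_prompt}", negative)
    )
    wf["5"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)  # node "5" = KSampler
    # Derive a filesystem-safe name from the positive prompt.
    slug = re.sub(r"[^a-z0-9]+", "-", positive.lower()).strip("-")[:40]
    out_dir = Path("projects") / project / "workflows"
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / f"txt2img-{slug}.json"
    out_path.write_text(json.dumps(wf, indent=2))
    return out_path
```

A request like "a red fox" for project "demo" would land at projects/demo/workflows/txt2img-a-red-fox.json.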
Extends the basic template by adding the pattern-specific nodes listed in the table above. See references/workflows.md for complete node settings.
Uses a different loader chain. See references/workflows.md Workflow 4 for complete settings.
| Component | Approximate VRAM |
|---|---|
| FLUX FP16 | 16GB |
| FLUX FP8 | 8GB |
| SDXL | 6GB |
| SD1.5 | 4GB |
| InstantID | +4GB |
| IP-Adapter | +2GB |
| ControlNet (each) | +1.5GB |
| Wan 14B | 20GB |
| Wan 1.3B | 5GB |
| AnimateDiff | +3GB |
| FaceDetailer | +2GB |
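The table above can drive a rough fit-check before a workflow is built. A minimal sketch, with values copied from the table (the function name and component keys are assumptions):

```python
# Rough VRAM estimate: base model + add-on components, using the
# approximate figures from the table above. Treat the result as a
# lower bound, not a guarantee.
BASE_GB = {"flux_fp16": 16, "flux_fp8": 8, "sdxl": 6, "sd15": 4,
           "wan_14b": 20, "wan_1_3b": 5}
ADDON_GB = {"instantid": 4, "ip_adapter": 2, "controlnet": 1.5,
            "animatediff": 3, "facedetailer": 2}

def estimate_vram_gb(base: str, addons: tuple[str, ...] = ()) -> float:
    return BASE_GB[base] + sum(ADDON_GB[a] for a in addons)
```

For example, FLUX FP8 with InstantID and one ControlNet comes to roughly 13.5 GB, so it will not fit on an 8 GB card.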
- Checkpoint loaders output [model, clip, vae] at output indices [0, 1, 2].
- FLUX uses a separate VAE file (ae.safetensors) and loads via LoadDiffusionModel, not LoadCheckpoint.

- references/workflows.md - Detailed node-by-node templates
- references/models.md - Model files and paths
- references/prompt-templates.md - Model-specific prompts
- state/inventory.json - Current inventory cache

Weekly Installs: 259 · GitHub Stars: 26 · First Seen: Feb 24, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Fail · Snyk: Pass
Installed on: opencode (255), gemini-cli (253), github-copilot (253), codex (253), amp (253), kimi-cli (253)