alicloud-ai-video-wan-video by cinience/alicloud-skills
npx skills add https://github.com/cinience/alicloud-skills --skill alicloud-ai-video-wan-video

Category: provider
mkdir -p output/alicloud-ai-video-wan-video
python -m py_compile skills/ai/video/alicloud-ai-video-wan-video/scripts/generate_video.py && echo "py_compile_ok" > output/alicloud-ai-video-wan-video/validate.txt
Pass criteria: the command exits with code 0 and output/alicloud-ai-video-wan-video/validate.txt is generated.
Artifacts are written to output/alicloud-ai-video-wan-video/. Provide consistent video generation behavior for the video-agent pipeline by standardizing video.generate inputs/outputs and using the DashScope SDK (Python) with exact model names.
Use one of these exact model strings:
- wan2.6-i2v-flash
- wan2.6-i2v
- wan2.6-i2v-us

Install the SDK (recommended inside a virtual environment to avoid PEP 668 restrictions):
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
Set DASHSCOPE_API_KEY in the environment, or add dashscope_api_key to the ~/.alibabacloud/credentials file (the environment variable takes precedence).
Inputs:
- prompt (string, required)
- negative_prompt (string, optional)
- duration (number, required), in seconds
- fps (number, required)
- size (string, required), e.g. 1280*720
- seed (int, optional)
- reference_image (string | bytes, required for i2v-family models)
- motion_strength (number, optional)

Outputs:
- video_url (string)
- duration (number)
- fps (number)
- seed (int)

Video generation is usually asynchronous. Expect a task ID and poll until completion. Note: Wan i2v models require an input image; map reference_image to img_url.
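The schema above can be enforced before any SDK call with a small normalizer. This is a sketch; `normalize_request` and its exact checks are not part of the skill:

```python
import re

# Required fields per the input schema above.
REQUIRED = ("prompt", "duration", "fps", "size")


def normalize_request(req: dict) -> dict:
    """Validate a video.generate request against the schema above."""
    missing = [f for f in REQUIRED if req.get(f) is None]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    if not re.fullmatch(r"\d+\*\d+", str(req["size"])):
        raise ValueError(f"size must be WIDTH*HEIGHT, e.g. 1280*720, got {req['size']!r}")
    # i2v models require a reference image (mapped to img_url later).
    if "i2v" in req.get("model", "wan2.6-i2v-flash") and not req.get("reference_image"):
        raise ValueError("i2v models require reference_image")
    return req
```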
import os
from dashscope import VideoSynthesis

# Prefer an env var for auth: export DASHSCOPE_API_KEY=...
# Or use ~/.alibabacloud/credentials with dashscope_api_key under [default].


def generate_video(req: dict) -> dict:
    payload = {
        "model": req.get("model", "wan2.6-i2v-flash"),
        "prompt": req["prompt"],
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration", 4),
        "fps": req.get("fps", 24),
        "size": req.get("size", "1280*720"),
        "seed": req.get("seed"),
        "motion_strength": req.get("motion_strength"),
        "api_key": os.getenv("DASHSCOPE_API_KEY"),
    }
    # Drop optional fields that were not provided.
    payload = {k: v for k, v in payload.items() if v is not None}
    if req.get("reference_image"):
        # DashScope expects img_url for i2v models; local files are auto-uploaded.
        payload["img_url"] = req["reference_image"]
    response = VideoSynthesis.call(**payload)
    # Some SDK versions require polling for the final result.
    # If a task_id is returned, poll until status is SUCCEEDED.
    result = response.output.get("results", [None])[0]
    return {
        "video_url": None if not result else result.get("url"),
        "duration": response.output.get("duration"),
        "fps": response.output.get("fps"),
        "seed": response.output.get("seed"),
    }
import os
from dashscope import VideoSynthesis

# Async variant: submit the task, then wait for the final result.
# `req` is the same request dict used by generate_video above.
task = VideoSynthesis.async_call(
    model=req.get("model", "wan2.6-i2v-flash"),
    prompt=req["prompt"],
    img_url=req["reference_image"],
    duration=req.get("duration", 4),
    fps=req.get("fps", 24),
    size=req.get("size", "1280*720"),
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)
final = VideoSynthesis.wait(task)
video_url = final.output.get("video_url")
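When you manage the task yourself instead of calling VideoSynthesis.wait, polling until the status is SUCCEEDED looks roughly like this. The loop is a sketch with the fetch function injected, so it is SDK-agnostic; with DashScope you would pass something like `VideoSynthesis.fetch` (an assumption to verify against your SDK version), and the `task_status` field names mirror the SUCCEEDED convention mentioned above:

```python
import time


def poll_until_done(fetch, task_id: str, interval: float = 5.0,
                    timeout: float = 600.0, sleep=time.sleep) -> dict:
    """Poll fetch(task_id) until task_status is SUCCEEDED or FAILED."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch(task_id)
        state = status.get("task_status")
        if state == "SUCCEEDED":
            return status
        if state == "FAILED":
            raise RuntimeError(f"video task {task_id} failed: {status}")
        sleep(interval)
    raise TimeoutError(f"video task {task_id} did not finish within {timeout}s")
```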
Cache on (prompt, negative_prompt, duration, fps, size, seed, reference_image hash, motion_strength). reference_image can be a URL or a local path; the SDK auto-uploads local files. A `Field required: input.img_url` error means the reference image is missing or not mapped. size uses WIDTH*HEIGHT format (e.g. 1280*720). Videos are written under output/alicloud-ai-video-wan-video/videos/; the OUTPUT_DIR environment variable overrides the base directory. See references/api_reference.md for DashScope SDK mapping and async handling notes. Source list: references/sources.md
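A deterministic cache key over the fields listed above can be derived like this. The helper is a sketch; note that for a local-path reference_image it hashes the path string, not the file contents, which you would want to replace with a hash of the file bytes in practice:

```python
import hashlib
import json


def cache_key(req: dict) -> str:
    """Deterministic cache key over the generation parameters listed above."""
    image = req.get("reference_image")
    # Simplification: hashes bytes directly, or the URL/path string for strings.
    image_hash = (
        hashlib.sha256(image if isinstance(image, bytes) else str(image).encode()).hexdigest()
        if image else None
    )
    fields = {
        "prompt": req.get("prompt"),
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration"),
        "fps": req.get("fps"),
        "size": req.get("size"),
        "seed": req.get("seed"),
        "reference_image": image_hash,
        "motion_strength": req.get("motion_strength"),
    }
    canonical = json.dumps(fields, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()
```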
Weekly Installs: 171
GitHub Stars: 337
First Seen: Feb 7, 2026
Security Audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Warn
Installed on: gemini-cli (170), github-copilot (170), codex (170), kimi-cli (170), amp (170), opencode (170)