alicloud-skill-creator by cinience/alicloud-skills
npx skills add https://github.com/cinience/alicloud-skills --skill alicloud-skill-creator

Category: tool
A skill engineering workflow dedicated to the alicloud-skills repository.
Use this skill when:
- creating new skills under skills/**
- updating skill metadata (the name and description frontmatter fields)
- adding or fixing smoke tests under tests/**

Do not use it for application code under apps/ that involves no skill changes.

Conventions:
- Skills live under skills/<domain>/<subdomain>/<skill-name>/.
- Skill names start with alicloud-.
- Every skill needs SKILL.md frontmatter with name and description.
- skills/**/SKILL.md content must stay English-only.
- Smoke tests go in tests/<domain>/<subdomain>/<skill-name>-test/SKILL.md.
- Generated artifacts go under output/<skill-or-test-skill>/ only.
- Refresh the README index with scripts/update_skill_index.sh.

Expected layout:

skills/<domain>/<subdomain>/<skill-name>/
├── SKILL.md
├── agents/openai.yaml
├── references/
│ └── sources.md
└── scripts/ (optional)
tests/<domain>/<subdomain>/<skill-name>-test/
└── SKILL.md
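The layout above can be scaffolded in a few commands. This is a hedged sketch for a hypothetical skill: the names alicloud-demo, compute, and ecs are placeholders, not real repository entries, and everything is created in a throwaway directory.

```shell
# Sketch: scaffold the documented layout for a placeholder skill.
set -eu
WORKDIR=$(mktemp -d)            # throwaway dir so nothing touches a real repo
cd "$WORKDIR"
SKILL=alicloud-demo             # placeholder; names must start with "alicloud-"
BASE="skills/compute/ecs/$SKILL"
TEST="tests/compute/ecs/${SKILL}-test"
mkdir -p "$BASE/agents" "$BASE/references" "$TEST"
# SKILL.md carries the required name/description frontmatter (English-only).
cat > "$BASE/SKILL.md" <<'EOF'
---
name: alicloud-demo
description: Placeholder description for a demo skill.
---
EOF
: > "$BASE/agents/openai.yaml"
: > "$BASE/references/sources.md"
: > "$TEST/SKILL.md"
find "$WORKDIR" -type f
```

The `find` at the end lists every created file, which is a quick way to compare the result against the tree above.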
Workflow:
1. Create SKILL.md + agents/openai.yaml (add scripts/, references/, or assets/ only as needed).
2. Add the smoke test at tests/**/<skill-name>-test/SKILL.md.
3. Run script compile validation for the skill:
python3 tests/common/compile_skill_scripts.py \
--skill-path skills/<domain>/<subdomain>/<skill-name> \
--output output/<skill-name>-test/compile-check.json
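The general idea behind that validation step can be illustrated standalone. This is a hedged sketch only, byte-compiling a skill's scripts with python3 -m py_compile in a throwaway directory; the real tests/common/compile_skill_scripts.py also writes a JSON report and may differ in detail.

```shell
# Sketch: byte-compile each *.py under a skill's scripts/ dir and
# report whether anything failed. Placeholder paths only.
set -u
DEMO=$(mktemp -d)
mkdir -p "$DEMO/scripts"
printf 'print("ok")\n' > "$DEMO/scripts/good.py"
FAILED=0
for py in "$DEMO"/scripts/*.py; do
  python3 -m py_compile "$py" || FAILED=1
done
echo "compile_failed=$FAILED"
```

A syntax error in any script flips the flag to 1, which is what makes the check useful as a fast pre-commit gate.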
4. Refresh the skill index whenever the inventory changes:
scripts/update_skill_index.sh
Confirm the index entry exists:
rg -n "<skill-name>" README.md README.zh-CN.md README.zh-TW.md
Optional broader checks:
make test
make build-cli
5. Benchmark loop (optional, for major skills)
If the user asks for a quantitative skill evaluation, reuse the bundled tooling:
- scripts/run_eval.py
- scripts/aggregate_benchmark.py
- eval-viewer/generate_review.py

Prefer placing benchmark artifacts in a sibling workspace directory, and keep each iteration's outputs.
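The per-iteration convention just described might look like the following sketch. The directory names are illustrative only, not mandated by the repository.

```shell
# Sketch: keep each benchmark iteration's artifacts in its own subdirectory
# of a workspace, so earlier runs are never overwritten.
set -eu
WS=$(mktemp -d)/benchmark-workspace   # stands in for a sibling directory
for i in 1 2 3; do
  mkdir -p "$WS/iter-$i"
  echo "placeholder result for iteration $i" > "$WS/iter-$i/results.txt"
done
ls "$WS"
```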
Keep generated artifacts under output/. Key reference files: references/schemas.md and references/sources.md.
Weekly installs: 18
GitHub stars: 337
First seen: 3 days ago
Security audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Pass
Installed on: qoder (18), claude-code (18), github-copilot (18), codex (18), amp (18), cline (18)