analyze-test-run by microsoft/github-copilot-for-azure
```shell
npx skills add https://github.com/microsoft/github-copilot-for-azure --skill analyze-test-run
```
Downloads artifacts from a GitHub Actions integration test run, generates a summarized skill invocation report, and files GitHub issues for each test failure with root-cause analysis.
| Parameter | Required | Description |
|---|---|---|
| Run ID or URL | Yes | GitHub Actions run ID (e.g. 22373768875) or full URL |
| Comparison Run | No | Second run ID/URL for side-by-side comparison |
All tools use owner: "microsoft" and repo: "GitHub-Copilot-for-Azure" as fixed parameters. method selects the operation within the tool.
| Tool | method | Key Parameter | Purpose |
|---|---|---|---|
| actions_get | get_workflow_run | resource_id: run ID | Fetch run status and metadata |
| actions_list | list_workflow_run_artifacts | resource_id: run ID | List all artifacts for a run |
| actions_get | download_workflow_run_artifact | resource_id: artifact ID | Get a temporary download URL for an artifact ZIP |
| get_job_logs | — | run_id + failed_only: true | Retrieve job logs when artifact content is inaccessible |
| search_issues | — | query: search string | Find existing open issues before creating new ones |
| create_issue | — | title, body, labels, assignees | File a new GitHub issue for a test failure |
Extract the numeric run ID from the input (strip URL prefix if needed)
Fetch run metadata using the MCP actions_get tool:
```js
actions_get({ method: "get_workflow_run", owner: "microsoft", repo: "GitHub-Copilot-for-Azure", resource_id: "<run-id>" })
```
List artifacts using the MCP actions_list tool, then download each relevant artifact:
```js
// List artifacts
actions_list({ method: "list_workflow_run_artifacts", owner: "microsoft", repo: "GitHub-Copilot-for-Azure", resource_id: "<run-id>" })

// Download individual artifacts by ID
actions_get({ method: "download_workflow_run_artifact", owner: "microsoft", repo: "GitHub-Copilot-for-Azure", resource_id: "<artifact-id>" })
```
The download returns a temporary URL. Fetch the ZIP archive from that URL and extract it locally. If the environment restricts outbound HTTP (e.g. AWF sandbox), record in the analysis report that artifact content was unavailable and fall back to job logs via the get_job_logs MCP tool.
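The steps above start from a clean numeric run ID. A minimal sketch of the ID extraction (Python; `extract_run_id` is an illustrative helper name, not part of the skill):

```python
import re

def extract_run_id(value: str) -> str:
    """Accept a bare run ID or a full GitHub Actions run URL and return the numeric ID."""
    value = value.strip()
    match = re.search(r"/runs/(\d+)", value)
    if match:
        return match.group(1)
    if value.isdigit():
        return value
    raise ValueError(f"no run ID found in {value!r}")
```

Both input shapes from the parameter table resolve to the same ID, e.g. `extract_run_id("https://github.com/microsoft/GitHub-Copilot-for-Azure/actions/runs/22373768875")` yields `"22373768875"`.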
Locate these files in the downloaded artifacts:
- junit.xml — test pass/fail/skip/error results
- *-SKILL-REPORT.md — generated skill report with per-test details
- agent-metadata-*.md files — raw agent session logs per test

⚠️ Note: If artifact ZIP files cannot be downloaded due to network restrictions, or if downloaded files cannot be extracted, use the get_job_logs MCP tool to identify test failures and produce a best-effort analysis from whatever data is accessible.
Produce a markdown report with four sections. See report-format.md for the exact template.
Section 1 — Test Results Overview
Parse junit.xml to build:
| Metric | Value |
|---|---|
| Total tests | count from <testsuites tests=…> |
| Executed | total − skipped |
| Skipped | count of <skipped/> elements |
| Passed | executed − failures − errors |
| Failed | count of <failure> elements |
| Test Pass Rate | passed / executed as % |
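The metrics above follow mechanically from the junit.xml counts. A stdlib-only sketch (the function name and returned keys are my own):

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)  # expected root: <testsuites tests="...">
    total = int(root.get("tests", "0"))
    skipped = sum(1 for _ in root.iter("skipped"))
    failed = sum(1 for _ in root.iter("failure"))
    errors = sum(1 for _ in root.iter("error"))
    executed = total - skipped
    passed = executed - failed - errors
    rate = round(100 * passed / executed, 1) if executed else 0.0
    return {"total": total, "executed": executed, "skipped": skipped,
            "passed": passed, "failed": failed, "pass_rate": rate}
```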
Include a per-test table with name, duration (from time attribute, convert seconds to Xm Ys), and Pass/Fail result.
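The seconds-to-Xm Ys conversion might be sketched as:

```python
def fmt_duration(seconds: float) -> str:
    """Convert a junit time attribute (seconds) to the Xm Ys form used in the report."""
    minutes, secs = divmod(round(seconds), 60)
    return f"{minutes}m {secs}s"
```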
Section 2 — Skill Invocation Rate
Read the SKILL-REPORT.md "Per-Test Case Results" sections. For each executed test, determine whether the skill under test was invoked.
The skills to track depend on which integration test suite the run belongs to:
azure-deploy integration tests — track the full deployment chain:
| Skill | How to detect |
|---|---|
| azure-prepare | Mentioned as invoked in the narrative or agent-metadata |
| azure-validate | Mentioned as invoked in the narrative or agent-metadata |
| azure-deploy | Mentioned as invoked in the narrative or agent-metadata |
Build a per-test invocation matrix (Yes/No for each skill) and compute rates:
| Skill | Invocation Rate |
|---|---|
| azure-deploy | X% (n/total) |
| azure-prepare | X% (n/total) |
| azure-validate | X% (n/total) |
| Full skill chain (P→V→D) | X% (n/total) |
The azure-deploy integration tests exercise the full deployment workflow where the agent is expected to invoke azure-prepare, azure-validate, and azure-deploy in sequence. This three-skill chain tracking is specific to azure-deploy tests only.
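Given a per-test matrix, the rate computation is straightforward. A sketch assuming the matrix is a dict of test name to per-skill booleans (structure and names are illustrative):

```python
CHAIN = ("azure-prepare", "azure-validate", "azure-deploy")

def invocation_rates(matrix: dict) -> dict:
    """matrix: test name -> {skill: True/False}. Returns 'X% (n/total)' per skill."""
    tests = list(matrix.values())
    total = len(tests)
    def rate(pred) -> str:
        n = sum(1 for t in tests if pred(t))
        return f"{round(100 * n / total)}% ({n}/{total})"
    rates = {skill: rate(lambda t, s=skill: t.get(s, False)) for skill in CHAIN}
    # A test counts toward the full chain only if all three skills were invoked
    rates["full chain (P→V→D)"] = rate(lambda t: all(t.get(s, False) for s in CHAIN))
    return rates
```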
All other integration tests — track only the skill under test:
| Skill | Invocation Rate |
|---|---|
| {skill-under-test} | X% (n/total) |
For non-deploy tests (e.g. azure-prepare, azure-ai, azure-kusto), only track whether the primary skill under test was invoked. Do not include azure-prepare/azure-validate/azure-deploy chain columns.
Section 3 — Report Confidence & Pass Rate
Extract from SKILL-REPORT.md:
Section 4 — Comparison (only when a second run is provided)
Repeat Phases 1–3 for the second run, then produce a side-by-side delta table. See report-format.md § Comparison.
If the Skill Invocation Success Rate is available and falls below 80%, create a GitHub issue using the create_issue MCP tool, apply a label named after the skill, and assign the issue to the code owners listed in the .github/CODEOWNERS file for that skill:
```js
create_issue({
  owner: "microsoft", repo: "GitHub-Copilot-for-Azure",
  title: "Integration test failure: <skill> – skill-invocation",
  labels: ["bug", "integration-test", "test-failure", "skill-invocation", "<skill>"],
  body: "<body>",
  assignees: ["<codeowners>"]
})
```
Issue body template — see issue-template.md.
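The 80% gate can be applied directly to the formatted rate strings from Section 2. A sketch (function name is my own):

```python
def below_invocation_threshold(rate_str: str, threshold: float = 80.0) -> bool:
    """Parse an 'X% (n/total)' rate string and flag values below the gate."""
    pct = float(rate_str.split("%", 1)[0])
    return pct < threshold
```

Note that exactly 80% does not trigger an issue; only rates strictly below the threshold do.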
For every test with a <failure> element in junit.xml:
1. Read the failure message and file:line from the XML
2. Read the actual line of code from the test file at that location
3. Read the agent-metadata-*.md for that test from the artifacts
4. Read the corresponding section in SKILL-REPORT.md for context on what the agent did
5. Determine the root cause category:
6. Before creating a new issue, search for existing open issues using the search_issues MCP tool:
```js
search_issues({ owner: "microsoft", repo: "GitHub-Copilot-for-Azure", query: "Integration test failure: {skill} in:title is:open" })
```
Match criteria: an open issue whose title and body describe a similar problem. If a match is found, skip issue creation for this failure and note the existing issue number(s) in the summary report.
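The match step can be approximated with a simple title check once search results are in hand. A sketch assuming issues arrive as dicts with number and title fields, as the GitHub search API returns them (the function name is my own):

```python
def find_existing_issue(open_issues: list, skill: str, keywords: str):
    """Return the number of an open issue whose title covers the same skill and keywords, else None."""
    for issue in open_issues:
        title = issue["title"].lower()
        if skill.lower() in title and keywords.lower() in title:
            return issue["number"]
    return None
```

In practice the match also weighs the issue body; this title-only check is the minimal version.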
7. If no existing issue was found, create a GitHub issue using the create_issue MCP tool, apply a label named after the skill, and assign the issue to the code owners listed in the .github/CODEOWNERS file for that skill:
```js
create_issue({
  owner: "microsoft", repo: "GitHub-Copilot-for-Azure",
  title: "Integration test failure: <skill> – <keywords> [<root-cause-category>]",
  labels: ["bug", "integration-test", "test-failure", "<skill>"],
  body: "<body>",
  assignees: ["<codeowners>"]
})
```
Title format: Integration test failure: {skill} – {keywords} [{root-cause-category}]
- {keywords}: 2–4 words from the test name — app type (function app, static web app) + IaC type (Terraform, Bicep) + trigger if relevant
- {root-cause-category}: one of the categories from step 5, in brackets

Issue body template — see issue-template.md.
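Assembling that title from its three parts (Python sketch; the category value in the example is hypothetical):

```python
def issue_title(skill: str, keywords: str, category: str) -> str:
    """Build 'Integration test failure: {skill} – {keywords} [{category}]'."""
    return f"Integration test failure: {skill} – {keywords} [{category}]"
```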
⚠️ Note: Do NOT include the Error Details (JUnit XML) or Agent Metadata sections in the issue body. Keep issues concise with the diagnosis, prompt context, skill report context, and environment sections only.
For azure-deploy integration tests, include an "azure-deploy Skill Invocation" section showing whether azure-deploy was invoked (Yes/No), with a note that the full chain is azure-prepare → azure-validate → azure-deploy. For all other integration tests, include a "{skill} Skill Invocation" section showing only whether the primary skill under test was invoked.
| Error | Cause | Fix |
|---|---|---|
| no artifacts found | Run has no uploadable reports | Verify the run completed the "Export report" step |
| HTTP 404 on actions_get | Invalid run ID or no access | Check the run ID and ensure the MCP token has repo access |
| rate limit exceeded | Too many GitHub API calls | Wait and retry; reduce concurrent MCP tool calls |
| Artifact ZIP download blocked | AWF sandbox restricts outbound HTTP to blob storage | Use the get_job_logs MCP tool to get failure details from job logs; produce a best-effort analysis from metadata |
Weekly Installs: 261
GitHub Stars: 160
First Seen: Mar 6, 2026
Security Audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Warn
Installed on:
- gemini-cli: 199
- codex: 199
- opencode: 184
- cursor: 183
- github-copilot: 181
- kimi-cli: 180