The Agent Skills Directory
npx skills add https://smithery.ai/skills/openshift-eng/jira-pull-request-extractor
This skill recursively discovers Jira issues and extracts all associated GitHub Pull Request links.
IMPORTANT FOR AI: This is a procedural skill - when invoked, you should directly execute the implementation steps defined in this document. Do NOT look for or execute external scripts (like extract_prs.py). Follow the step-by-step instructions in the "Implementation" section below.
Use this skill when you need to:
Key characteristics:
- Automatic recursion: the childIssuesOf() JQL function discovers all descendant issues
- Requires the MCP Atlassian tools configured (see plugins/jira/README.md for setup)
- Requires jq installed for JSON parsing
- Requires the GitHub CLI (gh) installed and authenticated for fetching PR metadata

Purpose: This skill returns structured JSON that serves as an interface contract for consuming skills and commands.
Delivery Method:
- JSON output to the console (default)
- Saved to .work/extract-prs/{issue-key}/output.json if the user explicitly requests to save results

Schema Version: 1.0
{
"schema_version": "1.0",
"metadata": {
"generated_at": "2025-11-24T10:30:00Z",
"command": "extract_prs",
"input_issue": "OCPSTRAT-1612"
},
"pull_requests": [
{
"url": "https://github.com/openshift/hypershift/pull/6444",
"state": "MERGED",
"title": "Add support for custom OVN subnets",
"isDraft": false,
"sources": ["comment", "description", "remote_link"],
"found_in_issues": ["CNTRLPLANE-1201", "OCPSTRAT-1612"]
}
]
}
Fields:
- schema_version: Format version ("1.0")
- metadata: Generation timestamp, command name, and input issue
- pull_requests: Array of PR objects with url, state, title, isDraft, sources, and found_in_issues

The skill operates in three main phases:
Phase 1: Discovers all descendant issues using Jira's childIssuesOf() JQL function (automatically recursive).
Implementation:

Fetch issue metadata using the MCP Jira tool:
mcp__atlassian__jira_get_issue(
issue_key=<issue-key>,
fields="summary,description,issuetype,status,comment",
expand="changelog"
)
The response provides:
- fields.description - for text-based PR URL extraction
- fields.comment.comments - for PR URLs mentioned in comments
- changelog.histories - for remote link PR URLs from RemoteIssueLink field changes

Search for ALL descendant issues using JQL:
mcp__atlassian__jira_search(
jql="issue in childIssuesOf(<issue-key>)",
fields="key",
limit=100
)
Notes:
- childIssuesOf() is already recursive - it returns ALL descendant issues (Epics, Stories, Subtasks, etc.) regardless of depth
- Only the key field is fetched here - full data (including changelog) is fetched per issue in Phase 2
- jira_search does NOT support the expand parameter - use jira_get_issue with expand="changelog" for each issue

Fetch full data for each issue (including root + all descendants):
for each issue_key:
mcp__atlassian__jira_get_issue(
issue_key=<issue-key>,
fields="summary,description,issuetype,status,comment",
expand="changelog"
)
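In bash terms, Phase 1 reduces to roughly the sketch below. Here mcp_jira_search is a hypothetical stand-in for the mcp__atlassian__jira_search call (stubbed with a canned response), and the .issues[].key path assumes the standard Jira search response shape; only the jq and sort plumbing is meant literally:

```shell
#!/usr/bin/env bash
set -eu

# Hypothetical stand-in for mcp__atlassian__jira_search (canned response for illustration)
mcp_jira_search() {
  echo '{"issues":[{"key":"CNTRLPLANE-1201"},{"key":"CNTRLPLANE-1202"}]}'
}

root_issue="OCPSTRAT-1612"

# Collect every descendant key from the search result
descendant_keys=$(mcp_jira_search | jq -r '.issues[].key')

# Phase 2 operates on the root issue plus all descendants, deduplicated
all_keys=$(printf '%s\n%s\n' "$root_issue" "$descendant_keys" | sort -u)

# Phase 2 would call jira_get_issue with expand="changelog" for each key
for key in $all_keys; do
  echo "would fetch: $key"
done
```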
Note: issue links (relates to, blocks, etc.) are not followed - only parent-child relationships are traversed.

Phase 2: Extracts PR URLs from two sources:
Extract remote links from issue changelog (using MCP with authenticated access):
# Fetch issue with changelog expansion (store in variable)
issue_json=$(mcp__atlassian__jira_get_issue \
issue_key="${issue_key}" \
expand="changelog")
# Extract RemoteIssueLink entries from changelog
pr_urls=$(echo "$issue_json" | jq -r '
.changelog.histories[]?.items[]? |
select(.field == "RemoteIssueLink") |
.toString // .to_string |
match("https://github\\.com/[^/]+/[^/]+/(pull|pulls)/[0-9]+") |
.string
' | sort -u)
Important:
- Looks only at RemoteIssueLink field changes
- Reads the URL from the toString or to_string field
- Filters for GitHub PR URLs matching the /pull/ or /pulls/ pattern
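To sanity-check the filter, the same jq program can be run against a fabricated changelog payload (the issue data below is invented for illustration; non-matching items such as status changes are silently skipped):

```shell
#!/usr/bin/env bash
set -eu

# Fabricated issue JSON with one RemoteIssueLink changelog entry and one unrelated change
issue_json='{
  "changelog": {
    "histories": [
      {"items": [
        {"field": "RemoteIssueLink",
         "toString": "This issue links to \"https://github.com/openshift/hypershift/pull/6444 (Web Link)\""},
        {"field": "status", "toString": "In Progress"}
      ]}
    ]
  }
}'

# Same filter as above: select RemoteIssueLink changes and pull out GitHub PR URLs
pr_urls=$(echo "$issue_json" | jq -r '
  .changelog.histories[]?.items[]? |
  select(.field == "RemoteIssueLink") |
  .toString // .to_string |
  match("https://github\\.com/[^/]+/[^/]+/(pull|pulls)/[0-9]+") |
  .string
' | sort -u)
echo "$pr_urls"
```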
Searches both fields.description and fields.comment.comments[] (already fetched in Phase 1)
Description: Extract from fields.description (plain text or Jira wiki format)
Comments: Iterate through the fields.comment.comments[] array and search each comment.body
Uses regex: https?://github\.com/([\w-]+)/([\w-]+)/pulls?/(\d+)
Example extraction (using variables):
# From description (stored in variable from MCP response)
description_prs=$(echo "$description" | \
grep -oE 'https?://github\.com/[^/]+/[^/]+/pulls?/[0-9]+')
# From comments (parse JSON in memory)
comment_prs=$(echo "$issue_json" | \
jq -r '.fields.comment.comments[]?.body // empty' | \
grep -oE 'https?://github\.com/[^/]+/[^/]+/pulls?/[0-9]+')
# Combine all PRs
all_prs=$(echo -e "${description_prs}\n${comment_prs}" | sort -u)
Phase 3: Deduplication:
- sources array: ["comment", "description", "remote_link"] (alphabetically sorted)
  - "comment": Found in issue comments
  - "description": Found in issue description
  - "remote_link": Found via Jira Remote Links API
- found_in_issues array: ["CNTRLPLANE-1201", "OCPSTRAT-1612"] (alphabetically sorted)

PR Metadata: Fetch via gh pr view {url} --json state,title,isDraft. CRITICAL: When building the output JSON, you MUST use the exact values returned by gh pr view - do NOT manually type or guess PR states/titles.
Output: Build JSON in memory using jq -n, output to console. Only save to .work/extract-prs/{issue-key}/output.json if the user explicitly requests it.

Important: Use bash variables for all data - no temporary files, to avoid user confirmation prompts.
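Assembling the final envelope with jq -n, as described above, might look like this sketch. The pull_requests variable is a placeholder for the deduplicated array built earlier, and everything stays in bash variables:

```shell
#!/usr/bin/env bash
set -eu

input_issue="OCPSTRAT-1612"
# Placeholder for the deduplicated, metadata-enriched PR array from the earlier phases
pull_requests='[{"url":"https://github.com/openshift/hypershift/pull/6444","state":"MERGED","title":"Add support for custom OVN subnets","isDraft":false,"sources":["comment"],"found_in_issues":["OCPSTRAT-1612"]}]'

# Build the schema 1.0 envelope in memory with jq -n
output_json=$(jq -n \
  --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --arg issue "$input_issue" \
  --argjson prs "$pull_requests" \
  '{
    schema_version: "1.0",
    metadata: {
      generated_at: $ts,
      command: "extract_prs",
      input_issue: $issue
    },
    pull_requests: $prs
  }')
echo "$output_json"
```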
expand="changelog" returns error, continue with text-based extraction only (graceful degradation)pull_requests array (valid result)limit parameter in jira_searchgh pr view fails due to rate limiting, display error with reset timegh pr view returns error (PR deleted/private), exclude that PR from outputAPI calls : 1 jira_search + N jira_get_issue (with changelog) + M gh pr view
File I/O : Save PR metadata to .work/extract-prs/{issue-key}/pr-*-metadata.json, build final JSON by reading files
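As a closing illustration of the interface contract, a consuming skill might query the output purely with jq and bash variables; the sample document here is the schema example from above:

```shell
#!/usr/bin/env bash
set -eu

# Sample document matching the schema above, held in a variable (no temp files)
output_json='{
  "schema_version": "1.0",
  "metadata": {
    "generated_at": "2025-11-24T10:30:00Z",
    "command": "extract_prs",
    "input_issue": "OCPSTRAT-1612"
  },
  "pull_requests": [
    {
      "url": "https://github.com/openshift/hypershift/pull/6444",
      "state": "MERGED",
      "title": "Add support for custom OVN subnets",
      "isDraft": false,
      "sources": ["comment", "description", "remote_link"],
      "found_in_issues": ["CNTRLPLANE-1201", "OCPSTRAT-1612"]
    }
  ]
}'

# Consumers should check the schema version before relying on the field layout
version=$(echo "$output_json" | jq -r '.schema_version')
[ "$version" = "1.0" ] || { echo "unsupported schema version: $version" >&2; exit 1; }

# Example query: merged, non-draft PR URLs
merged_urls=$(echo "$output_json" | jq -r '
  .pull_requests[] | select(.state == "MERGED" and (.isDraft | not)) | .url')
echo "$merged_urls"
```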