tech-news-digest by draco-agent/tech-news-digest
npx skills add https://github.com/draco-agent/tech-news-digest --skill tech-news-digest
Automated tech news digest system with a unified data source model, quality scoring pipeline, and template-based output generation.
1. Configuration Setup: default configs live in config/defaults/. Copy them to the workspace for customization:
mkdir -p workspace/config
cp config/defaults/sources.json workspace/config/tech-news-digest-sources.json
cp config/defaults/topics.json workspace/config/tech-news-digest-topics.json
2. Environment Variables:
* `TWITTERAPI_IO_KEY` - twitterapi.io API key (optional, preferred)
* `X_BEARER_TOKEN` - official Twitter/X API bearer token (optional, fallback)
* `TAVILY_API_KEY` - Tavily Search API key, alternative to Brave (optional)
* `WEB_SEARCH_BACKEND` - web search backend: auto|brave|tavily (optional, default: auto)
* `BRAVE_API_KEYS` - Brave Search API keys, comma-separated for rotation (optional)
* `BRAVE_API_KEY` - single Brave key fallback (optional)
* `GITHUB_TOKEN` - GitHub personal access token (optional, improves rate limits)
3. Generate Digest:
# Unified pipeline (recommended): runs all 6 sources in parallel, then merges
python3 scripts/run-pipeline.py \
--defaults config/defaults \
--config workspace/config \
--hours 48 --freshness pd \
--archive-dir workspace/archive/tech-news-digest/ \
--output /tmp/td-merged.json --verbose --force
4. Use Templates: apply the Discord, email, or PDF template to the merged output
sources.json - Unified Data Sources
{
"sources": [
{
"id": "openai-rss",
"type": "rss",
"name": "OpenAI Blog",
"url": "https://openai.com/blog/rss.xml",
"enabled": true,
"priority": true,
"topics": ["llm", "ai-agent"],
"note": "Official OpenAI updates"
},
{
"id": "sama-twitter",
"type": "twitter",
"name": "Sam Altman",
"handle": "sama",
"enabled": true,
"priority": true,
"topics": ["llm", "frontier-tech"],
"note": "OpenAI CEO"
}
]
}
topics.json - Enhanced Topic Definitions
{
"topics": [
{
"id": "llm",
"emoji": "🧠",
"label": "LLM / Large Models",
"description": "Large Language Models, foundation models, breakthroughs",
"search": {
"queries": ["LLM latest news", "large language model breakthroughs"],
"must_include": ["LLM", "large language model", "foundation model"],
"exclude": ["tutorial", "beginner guide"]
},
"display": {
"max_items": 8,
"style": "detailed"
}
}
]
}
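The search block in a topic can be read as a keyword filter over headlines: drop anything matching exclude, then require at least one must_include hit. A sketch of that reading (matches_topic is an illustrative name, not a function from the skill's scripts):

```python
def matches_topic(title: str, topic: dict) -> bool:
    """Return True if a headline passes the topic's include/exclude keyword rules."""
    text = title.lower()
    search = topic.get("search", {})
    # Drop items containing any excluded keyword.
    if any(kw.lower() in text for kw in search.get("exclude", [])):
        return False
    # Require at least one must_include keyword (when the list is non-empty).
    must = search.get("must_include", [])
    return not must or any(kw.lower() in text for kw in must)

topic = {
    "id": "llm",
    "search": {
        "must_include": ["LLM", "large language model"],
        "exclude": ["tutorial"],
    },
}
```

Case-insensitive substring matching is an assumption here; the actual scripts may tokenize differently.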
run-pipeline.py - Unified Pipeline (Recommended)
python3 scripts/run-pipeline.py \
--defaults config/defaults [--config CONFIG_DIR] \
--hours 48 --freshness pd \
--archive-dir workspace/archive/tech-news-digest/ \
--output /tmp/td-merged.json --verbose --force
Writes *.meta.json metadata files; if $GITHUB_TOKEN is not set, a GitHub App token is auto-generated.
fetch-rss.py - RSS Feed Fetcher
python3 scripts/fetch-rss.py [--defaults DIR] [--config DIR] [--hours 48] [--output FILE] [--verbose]
fetch-twitter.py - Twitter/X KOL Monitor
python3 scripts/fetch-twitter.py [--defaults DIR] [--config DIR] [--hours 48] [--output FILE] [--backend auto|official|twitterapiio]
Uses twitterapi.io if TWITTERAPI_IO_KEY is set; otherwise the official X API v2 if X_BEARER_TOKEN is set.
fetch-web.py - Web Search Engine
python3 scripts/fetch-web.py [--defaults DIR] [--config DIR] [--freshness pd] [--output FILE]
fetch-github.py - GitHub Releases Monitor
python3 scripts/fetch-github.py [--defaults DIR] [--config DIR] [--hours 168] [--output FILE]
Auth cascade: $GITHUB_TOKEN → GitHub App auto-generate → gh CLI → unauthenticated (60 req/hr).
fetch-github.py --trending - GitHub Trending Repos
python3 scripts/fetch-github.py --trending [--hours 48] [--output FILE] [--verbose]
fetch-reddit.py - Reddit Posts Fetcher
python3 scripts/fetch-reddit.py [--defaults DIR] [--config DIR] [--hours 48] [--output FILE]
enrich-articles.py - Article Full-Text Enrichment
python3 scripts/enrich-articles.py --input merged.json --output enriched.json [--min-score 10] [--max-articles 15] [--verbose]
merge-sources.py - Quality Scoring & Deduplication
python3 scripts/merge-sources.py --rss FILE --twitter FILE --web FILE --github FILE --reddit FILE
validate-config.py - Configuration Validator
python3 scripts/validate-config.py [--defaults DIR] [--config DIR] [--verbose]
generate-pdf.py - PDF Report Generator
python3 scripts/generate-pdf.py --input report.md --output digest.pdf [--verbose]
Requires weasyprint.
sanitize-html.py - Safe HTML Email Converter
python3 scripts/sanitize-html.py --input report.md --output email.html [--verbose]
source-health.py - Source Health Monitor
python3 scripts/source-health.py --rss FILE --twitter FILE --github FILE --reddit FILE --web FILE [--verbose]
summarize-merged.py - Merged Data Summary
python3 scripts/summarize-merged.py --input merged.json [--top N] [--topic TOPIC]
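merge-sources.py performs quality scoring and deduplication. One way to picture the dedup step is keeping the highest-scoring copy of each item, keyed by a normalized URL. A sketch under that assumption (the script's real keys and scoring heuristics may differ):

```python
from urllib.parse import urlsplit

def normalize_url(url):
    # Strip scheme, query string, and trailing slash so mirrored links collapse together.
    parts = urlsplit(url)
    return (parts.netloc + parts.path).rstrip("/").lower()

def dedupe(items):
    """Keep the best-scoring item per normalized URL, preserving first-seen order."""
    best = {}
    for item in items:
        key = normalize_url(item["url"])
        if key not in best or item["score"] > best[key]["score"]:
            best[key] = item
    return list(best.values())

items = [
    {"url": "https://openai.com/blog/post?utm=x", "score": 12, "source": "rss"},
    {"url": "http://openai.com/blog/post/", "score": 9, "source": "web"},
]
```

With this key, the two URLs above collapse into one entry and the higher-scored RSS copy wins.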
Place custom configs in workspace/config/ to override defaults:
- Set "enabled": false to disable a default source
- A source with an existing id → the user version takes precedence
- A source with a new id → appended after the defaults
- A topic with an existing id → the user version completely replaces the default
// workspace/config/tech-news-digest-sources.json
{
"sources": [
{
"id": "simonwillison-rss",
"enabled": false,
"note": "Disabled: too noisy for my use case"
},
{
"id": "my-custom-blog",
"type": "rss",
"name": "My Custom Tech Blog",
"url": "https://myblog.com/rss",
"enabled": true,
"priority": true,
"topics": ["frontier-tech"]
}
]
}
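The override rules shown above amount to a merge-by-id. A minimal sketch of that logic (assumed semantics: matching ids are patched with user keys winning, new ids are appended; the skill's actual implementation may differ):

```python
def merge_by_id(defaults, overrides):
    """Merge user config entries over defaults: matching ids are patched
    (user keys win), unknown ids are appended after the defaults."""
    merged = {entry["id"]: dict(entry) for entry in defaults}
    for entry in overrides:
        if entry["id"] in merged:
            merged[entry["id"]].update(entry)   # user version takes precedence
        else:
            merged[entry["id"]] = dict(entry)   # appended to defaults
    return list(merged.values())

defaults = [{"id": "openai-rss", "type": "rss", "enabled": True}]
overrides = [
    {"id": "openai-rss", "enabled": False},
    {"id": "my-custom-blog", "type": "rss", "enabled": True},
]
```

Patching (rather than replacing) a matching source is why a partial override entry with just an id and "enabled": false is enough to disable a default.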
Output templates:
- Discord (references/templates/discord.md)
- Email (references/templates/email.md)
- PDF (references/templates/pdf.md), generated with scripts/generate-pdf.py (requires weasyprint)
All sources come pre-configured with appropriate topic tags and priority levels.
pip install -r requirements.txt
Optional but recommended:
- feedparser>=6.0.0 - better RSS parsing (falls back to regex if unavailable)
- jsonschema>=4.0.0 - configuration validation
All scripts work with the Python 3.8+ standard library alone.
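The feedparser fallback mentioned above is the standard optional-import pattern; a sketch (extract_titles is illustrative, not a function from the skill):

```python
import re

try:
    import feedparser  # richer, more tolerant RSS parsing when installed
except ImportError:
    feedparser = None  # fall back to stdlib regex-based extraction

def extract_titles(xml):
    """Return item titles from an RSS document, with or without feedparser."""
    if feedparser is not None:
        return [entry.title for entry in feedparser.parse(xml).entries]
    # Crude stdlib-only fallback, adequate for well-formed feeds.
    return re.findall(r"<title>(.*?)</title>", xml, re.S)[1:]  # skip channel title
```

Both paths give the same result on well-formed feeds; feedparser additionally tolerates malformed XML, which the regex path does not.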
# Validate configuration
python3 scripts/validate-config.py --verbose
# Test RSS feeds
python3 scripts/fetch-rss.py --hours 1 --verbose
# Check Twitter API
python3 scripts/fetch-twitter.py --hours 1 --verbose
Archives are written to <workspace>/archive/tech-news-digest/. Set these in ~/.zshenv or similar:
# Twitter (at least one required for the Twitter source)
export TWITTERAPI_IO_KEY="your_key" # twitterapi.io key (preferred)
export X_BEARER_TOKEN="your_bearer_token" # official X API v2 (fallback)
export TWITTER_API_BACKEND="auto" # auto|twitterapiio|official (default: auto)
# Web search (optional, enables the web search layer)
export WEB_SEARCH_BACKEND="auto" # auto|brave|tavily (default: auto)
export TAVILY_API_KEY="tvly-xxx" # Tavily Search API (free tier: 1000/mo)
# Brave Search (alternative)
export BRAVE_API_KEYS="key1,key2,key3" # multiple keys, comma-separated for rotation
export BRAVE_API_KEY="key1" # single-key fallback
export BRAVE_PLAN="free" # override rate-limit detection: free|pro
# GitHub (optional, improves rate limits)
export GITHUB_TOKEN="ghp_xxx" # PAT (simplest)
export GH_APP_ID="12345" # or use a GitHub App for auto-token generation
export GH_APP_INSTALL_ID="67890"
export GH_APP_KEY_FILE="/path/to/key.pem"
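The comma-separated BRAVE_API_KEYS rotation can be pictured as a round-robin over the parsed list, with BRAVE_API_KEY as the single-key fallback. A hedged sketch (the scripts may instead rotate only on rate-limit errors):

```python
import itertools
import os

def brave_key_cycle(env=os.environ):
    """Yield Brave API keys round-robin, honoring the documented precedence:
    BRAVE_API_KEYS (comma-separated) first, BRAVE_API_KEY as fallback."""
    raw = env.get("BRAVE_API_KEYS") or env.get("BRAVE_API_KEY", "")
    keys = [k.strip() for k in raw.split(",") if k.strip()]
    if not keys:
        raise RuntimeError("no Brave API key configured")
    return itertools.cycle(keys)

keys = brave_key_cycle({"BRAVE_API_KEYS": "key1, key2"})
```

Each call to next(keys) yields the next key in order, wrapping around indefinitely.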
TWITTERAPI_IO_KEY is preferred ($3-5/mo); X_BEARER_TOKEN serves as fallback; auto mode tries twitterapiio first.
The cron prompt should NOT hardcode the pipeline steps. Instead, reference references/digest-prompt.md and pass only configuration parameters. This keeps the pipeline logic in the skill repo and consistent across all installations.
Read <SKILL_DIR>/references/digest-prompt.md and follow the complete workflow to generate a daily digest.
Replace placeholders with:
- MODE = daily
- TIME_WINDOW = past 1-2 days
- FRESHNESS = pd
- RSS_HOURS = 48
- ITEMS_PER_SECTION = 3-5
- ENRICH = true
- BLOG_PICKS_COUNT = 3
- EXTRA_SECTIONS = (none)
- SUBJECT = Daily Tech Digest - YYYY-MM-DD
- WORKSPACE = <your workspace path>
- SKILL_DIR = <your skill install path>
- DISCORD_CHANNEL_ID = <your channel id>
- EMAIL = (optional)
- LANGUAGE = English
- TEMPLATE = discord
Follow every step in the prompt template strictly. Do not skip any steps.
Read <SKILL_DIR>/references/digest-prompt.md and follow the complete workflow to generate a weekly digest.
Replace placeholders with:
- MODE = weekly
- TIME_WINDOW = past 7 days
- FRESHNESS = pw
- RSS_HOURS = 168
- ITEMS_PER_SECTION = 10-15
- ENRICH = true
- BLOG_PICKS_COUNT = 3-5
- EXTRA_SECTIONS = 📊 Weekly Trend Summary (2-3 sentences summarizing macro trends)
- SUBJECT = Weekly Tech Digest - YYYY-MM-DD
- WORKSPACE = <your workspace path>
- SKILL_DIR = <your skill install path>
- DISCORD_CHANNEL_ID = <your channel id>
- EMAIL = (optional)
- LANGUAGE = English
- TEMPLATE = discord
Follow every step in the prompt template strictly. Do not skip any steps.
This keeps the pipeline logic in digest-prompt.md, not scattered across cron configs.
OpenClaw enforces cross-provider isolation: a single session can only send messages to one provider (e.g., Discord OR Telegram, not both). If you need to deliver digests to multiple platforms, create a separate cron job per provider:
# Job 1: Discord + Email
- DISCORD_CHANNEL_ID = <your-discord-channel-id>
- EMAIL = user@example.com
- TEMPLATE = discord
# Job 2: Telegram DM
- DISCORD_CHANNEL_ID = (none)
- EMAIL = (none)
- TEMPLATE = telegram
In the second job's prompt, replace the DISCORD_CHANNEL_ID delivery with the target platform's delivery.
This is a security feature, not a bug: it prevents accidental cross-context data leakage.
This skill uses a prompt template pattern: the agent reads digest-prompt.md and follows its instructions. This is the standard OpenClaw skill execution model; the agent interprets structured instructions from skill-provided files. All instructions ship with the skill bundle and can be audited before installation.
The Python scripts make outbound requests only to:
- Configured RSS feeds (set in tech-news-digest-sources.json)
- Twitter API (api.x.com or api.twitterapi.io)
- Brave Search API (api.search.brave.com)
- Tavily Search API (api.tavily.com)
- GitHub API (api.github.com)
- Reddit JSON API (reddit.com)
No data is sent to any other endpoints. All API keys are read from environment variables declared in the skill metadata.
Email delivery uses send-email.py, which builds proper MIME multipart messages with an HTML body plus an optional PDF attachment. The subject format is hardcoded (Daily Tech Digest - YYYY-MM-DD). PDF generation uses generate-pdf.py via weasyprint. The prompt template explicitly prohibits interpolating untrusted content (article titles, tweet text, etc.) into shell arguments. Email addresses and subjects must be static placeholder values.
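The programmatic MIME construction described above can be sketched with the stdlib email package (the function name and defaults are illustrative, not send-email.py's actual interface):

```python
from email.message import EmailMessage

def build_digest_email(html_body, pdf_bytes=None, to_addr="user@example.com",
                       subject="Daily Tech Digest - 2026-02-18"):
    """Build a MIME message programmatically: no shell interpolation anywhere."""
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = subject  # static format string, never built from fetched data
    msg.set_content("This digest requires an HTML-capable client.")  # plain-text fallback
    msg.add_alternative(html_body, subtype="html")
    if pdf_bytes is not None:
        msg.add_attachment(pdf_bytes, maintype="application",
                           subtype="pdf", filename="digest.pdf")
    return msg

msg = build_digest_email("<h1>Digest</h1>", pdf_bytes=b"%PDF-1.4 ...")
```

Attaching the PDF upgrades the message from multipart/alternative to multipart/mixed automatically; the article HTML only ever travels as a message part, never as a shell argument.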
Scripts read from config/ and write to workspace/archive/. No files outside the workspace are accessed.
Troubleshooting:
- Run with --verbose for details
- Run validate-config.py to pinpoint specific issues
- Check the time window (--hours) and which sources are enabled
All scripts support the --verbose flag for detailed logging and troubleshooting.
Tuning:
- Adjust MAX_WORKERS in the scripts for your system
- Raise TIMEOUT on slow networks
- Tune MAX_ARTICLES_PER_FEED as needed
The digest prompt instructs agents to run the Python scripts via shell commands. All script paths and arguments are skill-defined constants; no user input is interpolated into commands. Two scripts use subprocess:
- run-pipeline.py orchestrates the child fetch scripts (all inside the scripts/ directory)
- fetch-github.py makes two subprocess calls:
  - openssl dgst -sha256 -sign for JWT signing (only when the GH_APP_* env vars are set; it signs a self-constructed JWT payload, no user content involved)
  - gh auth token as a CLI fallback (only when gh is installed; it reads from gh's own credential store)
No user-supplied or fetched content is ever interpolated into subprocess arguments. Email delivery uses send-email.py, which builds MIME messages programmatically with no shell interpolation. PDF generation uses generate-pdf.py via weasyprint. Email subjects are static format strings, never constructed from fetched data.
Scripts do not directly read ~/.config/, ~/.ssh/, or any credential files. All API tokens are read from environment variables declared in the skill metadata. The GitHub auth cascade is:
1. The $GITHUB_TOKEN env var (you control what to provide)
2. A GitHub App token (only when GH_APP_ID, GH_APP_INSTALL_ID, and GH_APP_KEY_FILE are all set; JWT signing is done inline via the openssl CLI, no external scripts involved)
3. The gh auth token CLI (delegates to gh's own secure credential store)
If you prefer no automatic credential discovery, simply set $GITHUB_TOKEN and the script will use it directly without attempting steps 2-3.
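The cascade above can be sketched as a chain of attempts that stops at the first credential that resolves (github_token and app_token are illustrative names, and step 2's JWT signing is elided):

```python
import os
import shutil
import subprocess

def github_token(env=os.environ):
    """Resolve a GitHub token following the documented cascade."""
    # 1. An explicit token always wins and disables further discovery.
    if env.get("GITHUB_TOKEN"):
        return env["GITHUB_TOKEN"]
    # 2. GitHub App flow only when all three GH_APP_* vars are present.
    if all(env.get(k) for k in ("GH_APP_ID", "GH_APP_INSTALL_ID", "GH_APP_KEY_FILE")):
        return app_token(env)  # JWT signed via openssl, exchanged for an installation token
    # 3. gh CLI, if installed, supplies a token from its own credential store.
    if shutil.which("gh"):
        out = subprocess.run(["gh", "auth", "token"], capture_output=True, text=True)
        if out.returncode == 0:
            return out.stdout.strip()
    return None  # unauthenticated: 60 requests/hour

def app_token(env):
    raise NotImplementedError("JWT signing elided in this sketch")
```

Note the early return in step 1: setting $GITHUB_TOKEN is enough to guarantee steps 2-3 are never attempted.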
This skill does not install any packages. requirements.txt lists optional dependencies (feedparser, jsonschema) for reference only. All scripts run on the Python 3.8+ standard library alone. Install the optional dependencies in a virtualenv if desired; the skill never runs pip install.
Scripts make outbound HTTP requests to the configured RSS feeds, Twitter API, GitHub API, Reddit JSON API, Brave Search API, and Tavily Search API. No inbound connections or listeners are created.
Weekly Installs: 412
Repository: https://github.com/draco-agent/tech-news-digest
GitHub Stars: 45
First Seen: Feb 18, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
Installed on: github-copilot (410), codex (410), opencode (410), gemini-cli (409), kimi-cli (409), amp (409)