last30days by mvanhorn/last30days-skill
npx skills add https://github.com/mvanhorn/last30days-skill --skill last30days

Permissions overview: Reads public web/platform data and optionally saves research briefings to ~/Documents/Last30Days/. X/Twitter search uses optional user-provided tokens (AUTH_TOKEN/CT0 env vars). Bluesky search uses optional app password (BSKY_HANDLE/BSKY_APP_PASSWORD env vars - create at bsky.app/settings/app-passwords). Truth Social search uses optional bearer token (TRUTHSOCIAL_TOKEN env var - extract from browser dev tools). All credential usage and data writes are documented in the Security & Permissions section.
Research ANY topic across Reddit, X, Bluesky, Truth Social, YouTube, TikTok, Hacker News, Polymarket, and the web. Surface what people are actually discussing, recommending, betting on, and debating right now.
Before doing anything, parse the user's input for:
Common patterns:
- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK
- [A] vs [B] → COMPARISON (split on vs or versus with spaces)
IMPORTANT: Do NOT ask about the target tool before research.
Store these variables:
- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [RECOMMENDATIONS | NEWS | HOW-TO | COMPARISON | GENERAL]
- TOPIC_A = [first item] (only if COMPARISON)
- TOPIC_B = [second item] (only if COMPARISON)
DISPLAY your parsing to the user. Before running any tools, output:
I'll research {TOPIC} across Reddit, X, Bluesky, Truth Social, TikTok, and the web to find what's been discussed in the last 30 days.
Parsed intent:
- TOPIC = {TOPIC}
- TARGET_TOOL = {TARGET_TOOL or "unknown"}
- QUERY_TYPE = {QUERY_TYPE}
Research typically takes 2-8 minutes (niche topics take longer). Starting now.
If TARGET_TOOL is known, mention it in the intro: "...to find {QUERY_TYPE}-style content for use in {TARGET_TOOL}."
This text MUST appear before you call any tools. It confirms to the user that you understood their request.
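As a rough illustration, the parsing rules above can be sketched in Python. The function name and the keyword heuristics are assumptions for the sketch, not part of the skill; the agent does the real parsing.

```python
import re

def parse_intent(query: str) -> dict:
    """Illustrative sketch of the intent-parsing rules; real parsing is done by the agent."""
    q = query.strip()
    # COMPARISON: split on "vs" or "versus" surrounded by spaces
    parts = re.split(r"\s+(?:vs\.?|versus)\s+", q, flags=re.IGNORECASE)
    if len(parts) == 2:
        return {"QUERY_TYPE": "COMPARISON", "TOPIC_A": parts[0],
                "TOPIC_B": parts[1], "TOPIC": q, "TARGET_TOOL": "unknown"}
    topic, tool = q, "unknown"
    m = re.match(r"(.+?)\s+(?:prompts\s+)?for\s+(.+)$", q, flags=re.IGNORECASE)
    if m:  # "[topic] for [tool]" or "[topic] prompts for [tool]"
        topic, tool = m.group(1), m.group(2)
    lowered = q.lower()
    if lowered.startswith(("best ", "top ")):
        qtype = "RECOMMENDATIONS"
    elif "news" in lowered:
        qtype = "NEWS"
    elif "prompt" in lowered:
        qtype = "PROMPTING"
    else:
        qtype = "GENERAL"
    return {"QUERY_TYPE": qtype, "TOPIC": topic, "TARGET_TOOL": tool}
```

Note the heuristic deliberately stays shallow: a tool is only detected from an explicit "for", and anything unmatched falls through to GENERAL rather than guessing.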
If TOPIC looks like it could have its own X/Twitter account - people, creators, brands, products, tools, companies, communities (e.g., "Dor Brothers", "Jason Calacanis", "Nano Banana Pro", "Seedance", "Midjourney"), do ONE quick WebSearch:
WebSearch("{TOPIC} X twitter handle site:x.com")
From the results, extract their X/Twitter handle. Look for:
- x.com/{handle} or twitter.com/{handle}
Verify the account is real, not a parody/fan account. Check for:
If you find a clear, verified handle, pass it as --x-handle={handle} (without @). This searches that account's posts directly - finding content they posted that doesn't mention their own name.
Skip this step if:
- running at --quick depth
Store: RESOLVED_HANDLE = {handle or empty}
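A minimal sketch of the handle-extraction step. The function name and the skip-list of reserved x.com paths are assumptions for the sketch:

```python
import re

# Reserved x.com paths that are not user handles (illustrative, not exhaustive)
RESERVED = {"i", "home", "search", "hashtag", "intent", "share"}

def extract_handle(result_urls):
    """Return the first plausible X handle found in WebSearch result URLs, without the @."""
    for url in result_urls:
        m = re.search(r"(?:x|twitter)\.com/([A-Za-z0-9_]{1,15})(?:[/?#]|$)", url)
        if m and m.group(1).lower() not in RESERVED:
            return m.group(1)  # pass as --x-handle={handle}
    return ""  # leave RESOLVED_HANDLE empty
```

A URL match alone is not proof the account is real; the parody/fan-account check still has to happen by reading the search result snippets.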
If --agent appears in ARGUMENTS (e.g., /last30days plaud granola --agent):
- No AskUserQuestion calls - use TARGET_TOOL = "unknown" if not specified
Agent mode saves raw research data to ~/Documents/Last30Days/ automatically via --save-dir (handled by the script, no extra tool calls).
Agent mode report format:
## Research Report: {TOPIC}
Generated: {date} | Sources: Reddit, X, Bluesky, Truth Social, YouTube, TikTok, HN, Polymarket, Web
### Key Findings
[3-5 bullet points, highest-signal insights with citations]
### What I learned
{The full "What I learned" synthesis from normal output}
### Stats
{The standard stats block}
When the user asks "X vs Y", run THREE research passes (passes 1+2 in parallel, then pass 3):
Pass 1 + 2 (parallel Bash calls):
# Run BOTH of these as parallel Bash tool calls in a single message:
python3 "${SKILL_ROOT}/scripts/last30days.py" {TOPIC_A} --emit=compact --no-native-web --save-dir=~/Documents/Last30Days
python3 "${SKILL_ROOT}/scripts/last30days.py" {TOPIC_B} --emit=compact --no-native-web --save-dir=~/Documents/Last30Days
Pass 3 (after passes 1+2 complete):
python3 "${SKILL_ROOT}/scripts/last30days.py" "{TOPIC_A} vs {TOPIC_B}" --emit=compact --no-native-web --save-dir=~/Documents/Last30Days
Then do WebSearch for: {TOPIC_A} vs {TOPIC_B} comparison 2026 and {TOPIC_A} vs {TOPIC_B} which is better.
Skip the normal Step 1 below - go directly to the comparison synthesis format (see "If QUERY_TYPE = COMPARISON" in the synthesis section).
Step 1: Run the research script (FOREGROUND — do NOT background this)
CRITICAL: Run this command in the FOREGROUND with a 5-minute timeout. Do NOT use run_in_background. The full output contains Reddit, X, AND YouTube data that you need to read completely.
IMPORTANT: The script handles API key/Codex auth detection automatically. Run it and check the output to determine mode.
# Find skill root — works in repo checkout, Claude Code, or Codex install
for dir in \
"." \
"${CLAUDE_PLUGIN_ROOT:-}" \
"${GEMINI_EXTENSION_DIR:-}" \
"$HOME/.claude/plugins/marketplaces/last30days-skill" \
"$HOME/.gemini/extensions/last30days-skill" \
"$HOME/.gemini/extensions/last30days" \
"$HOME/.claude/skills/last30days" \
"$HOME/.agents/skills/last30days" \
"$HOME/.codex/skills/last30days"; do
[ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done
if [ -z "${SKILL_ROOT:-}" ]; then
echo "ERROR: Could not find scripts/last30days.py" >&2
exit 1
fi
python3 "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --emit=compact --no-native-web --save-dir=~/Documents/Last30Days # Add --x-handle=HANDLE if RESOLVED_HANDLE is set
Use a timeout of 300000 (5 minutes) on the Bash call. The script typically takes 1-3 minutes.
The script will automatically:
Read the ENTIRE output. It contains EIGHT data sections in this order: Reddit items, X items, YouTube items, TikTok items, Instagram Reels items, Hacker News items, Polymarket items, and WebSearch items. If you miss sections, you will produce incomplete stats.
YouTube items in the output look like: **{video_id}** (score:N) {channel_name} [N views, N likes] followed by a title, URL, transcript highlights (pre-extracted quotable excerpts from the video), and an optional full transcript in a collapsible section. Quote the highlights directly in your synthesis - they are the YouTube equivalent of Reddit top comments. Attribute quotes to the channel name. Count them and include them in your synthesis and stats block.
TikTok items in the output look like: **{TK_id}** (score:N) @{creator} [N views, N likes] followed by a caption, URL, hashtags, and an optional caption snippet. Count them and include them in your synthesis and stats block.
Instagram Reels items in the output look like: **{IG_id}** (score:N) @{creator} (date) [N views, N likes] followed by caption text, URL, and optional transcript. Count them and include them in your synthesis and stats block. Instagram provides a unique creator/influencer perspective — weight it alongside TikTok.
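The item formats above are regular enough to tally mechanically. A counting sketch, assuming lines literally follow the bracket format described (the function name is an assumption):

```python
import re

# Matches lines like: **abc123** (score:8) @creator [1,200 views, 90 likes]
ITEM = re.compile(
    r"\*\*[^*]+\*\*\s+\(score:\d+\)\s+.+?\s+"
    r"\[(?P<views>[\d,]+) views, (?P<likes>[\d,]+) likes\]"
)

def tally_items(lines):
    """Count media items and sum their views/likes for the stats block."""
    n = views = likes = 0
    for line in lines:
        m = ITEM.search(line)
        if m:
            n += 1
            views += int(m.group("views").replace(",", ""))
            likes += int(m.group("likes").replace(",", ""))
    return {"items": n, "views": views, "likes": likes}
```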
After the script finishes, do WebSearch to supplement with blogs, tutorials, and news.
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- best {TOPIC} recommendations
- {TOPIC} list examples
- most popular {TOPIC}
If NEWS ("what's happening with X", "X news"):
- {TOPIC} news 2026
- {TOPIC} announcement update
If PROMPTING ("X prompts", "prompting for X"):
- {TOPIC} prompts examples 2026
- {TOPIC} techniques tips
If GENERAL (default):
- {TOPIC} 2026
- {TOPIC} discussion
For ALL query types:
Options (passed through from the user's command):
- --days=N → Look back N days instead of 30 (e.g., --days=7 for weekly roundup)
- --quick → Faster, fewer sources (8-12 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)
After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
Weight Reddit/X sources HIGHER (they have engagement signals: upvotes, likes)
Weight YouTube sources HIGH (they have views, likes, and transcript content)
Weight TikTok sources HIGH (they have views, likes, and caption content — viral signal)
Weight WebSearch sources LOWER (no engagement data)
For Reddit: Pay special attention to top comments — they often contain the wittiest, most insightful, or funniest take. When a top comment has high upvotes (shown as 💬 Top comment (N upvotes)), quote it directly in your synthesis. Reddit's value is in the comments.
For YouTube: Quote transcript highlights directly in your synthesis. These are pre-extracted key moments from the video - treat them like Reddit top comments. Attribute to the channel name and include the actual quote. YouTube's value is in what creators SAY, not just their view counts.
Identify patterns that appear across ALL sources (strongest signals)
Note any contradictions between sources
Extract the top 3-5 actionable insights
Cross-platform signals are the strongest evidence. When items have [also on: Reddit, HN] or similar tags, it means the same story appears across multiple platforms. Lead with these cross-platform findings - they're the most important signals in the research.
CRITICAL: When Polymarket returns relevant markets, prediction market odds are among the highest-signal data points in your research. Real money on outcomes cuts through opinion. Treat them as strong evidence, not an afterthought.
How to interpret and synthesize Polymarket data:
Prefer structural/long-term markets over near-term deadlines. Championship odds > regular season title. Regime change > near-term strike deadline. IPO/major milestone > incremental update. Presidency > individual state primary. When multiple markets exist, the bigger question is more interesting to the user.
When the topic is an outcome in a multi-outcome market, call out that specific outcome's odds and movement. Don't just say "Polymarket has a #1 seed market" - say "Arizona has a 28% chance of being the #1 overall seed, up 10% this month." The user cares about THEIR topic's position in the market.
Weave odds into the narrative as supporting evidence. Don't isolate Polymarket data in its own paragraph. Instead: "Final Four buzz is building - Polymarket gives Arizona a 12% chance to win the championship (up 3% this week), and 28% to earn a #1 seed."
Citation format: Always include specific odds AND movement. "Polymarket has Arizona at 28% for a #1 seed (up 10% this month)" - not just "per Polymarket."
When multiple relevant markets exist, highlight 3-5 of the most interesting ones in your synthesis, ordered by importance (structural > near-term). Don't just pick the highest-volume one.
Domain examples of market importance ranking:
Do NOT display stats here - they come at the end, right before the invitation.
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID: If the user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When the user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
“Skills are powerful. Keep them under 500 lines. Use progressive disclosure.”
GOOD synthesis for "best Claude Code skills":
“Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X.”
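The difference between the two syntheses is countable specificity. A tally sketch (the skill names in the test are hypothetical examples, and the function is an assumption):

```python
from collections import Counter

def most_mentioned(names, top=5):
    """Turn specific names extracted from the research into ranked mention counts."""
    return [f"{name} - {count}x mentions"
            for name, count in Counter(names).most_common(top)]
```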
Structure the output as a side-by-side comparison using data from all three research passes:
# {TOPIC_A} vs {TOPIC_B}: What the Community Says (Last 30 Days)
## Quick Verdict
[1-2 sentence data-driven summary: which one the community prefers and why, with source counts]
## {TOPIC_A}
**Community Sentiment:** [Positive/Mixed/Negative] ({N} mentions across {sources})
**Strengths (what people love)**
- [Point 1 with source attribution]
- [Point 2]
**Weaknesses (common complaints)**
- [Point 1 with source attribution]
- [Point 2]
## {TOPIC_B}
**Community Sentiment:** [Positive/Mixed/Negative] ({N} mentions across {sources})
**Strengths (what people love)**
- [Point 1 with source attribution]
- [Point 2]
**Weaknesses (common complaints)**
- [Point 1 with source attribution]
- [Point 2]
## Head-to-Head
[Synthesis from the "A vs B" combined search - what people say when directly comparing]
| Dimension | {TOPIC_A} | {TOPIC_B} |
|-----------|-----------|-----------|
| [Key dimension 1] | [A's position] | [B's position] |
| [Key dimension 2] | [A's position] | [B's position] |
| [Key dimension 3] | [A's position] | [B's position] |
## The Bottom Line
Choose {TOPIC_A} if... Choose {TOPIC_B} if... (based on actual community data, not assumptions)
Then show combined stats from all three passes and the standard invitation section.
Identify from the ACTUAL RESEARCH OUTPUT:
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned with sources:
🏆 Most mentioned:
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle1, @handle2, r/sub, blog.com
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle3, r/sub2, Complex
Notable mentions: [other specific things with 1-2 mentions]
CRITICAL for RECOMMENDATIONS:
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
CITATION RULE: Cite sources sparingly to prove the research is real.
CITATION PRIORITY (most to least preferred):
The tool's value is surfacing what PEOPLE are saying, not what journalists wrote. When both a web article and an X post cover the same fact, cite the X post.
URL FORMATTING: NEVER paste raw URLs anywhere in the output — not in synthesis, not in stats, not in sources.
BAD: 🌐 Web: 10 pages — https://later.com/blog/..., https://buffer.com/...
GOOD: 🌐 Web: 10 pages — Later, Buffer, CNN, SocialBee
Use the publication/site name, not the URL. The user doesn't need links — they need clean, readable text.
BAD: "His album is set for March 20 (per Rolling Stone; Billboard; Complex)."
GOOD: "His album BULLY drops March 20 — fans on X are split on the tracklist, per @honest30bgfan_"
GOOD: "Ye's apology got massive traction on r/hiphopheads"
OK (web, only when Reddit/X don't have it): "The Hellwatt Festival runs July 4-18 at RCF Arena, per Billboard"
Lead with people, not publications. Start each topic with what Reddit/X users are saying/feeling, then add web context only if needed. The user came here for the conversation, not the press release.
What I learned:
**{Topic 1}** — [1-2 sentences about what people are saying, per @handle or r/sub]
**{Topic 2}** — [1-2 sentences, per @handle or r/sub]
**{Topic 3}** — [1-2 sentences, per @handle or r/sub]
KEY PATTERNS from the research:
1. [Pattern] — per @handle
2. [Pattern] — per r/sub
3. [Pattern] — per @handle
THEN - Stats (right before the invitation):
CRITICAL: Calculate actual totals from the research output.
Parse [Xlikes, Yrt] from each X post, [Xpts, Ycmt] from Reddit. Copy this format EXACTLY, replacing only the {placeholders}:
---
✅ All agents reported back!
├─ 🟠 Reddit: {N} threads │ {N} upvotes │ {N} comments
├─ 🔵 X: {N} posts │ {N} likes │ {N} reposts
├─ 🔴 YouTube: {N} videos │ {N} views │ {N} with transcripts
├─ 🎵 TikTok: {N} videos │ {N} views │ {N} likes │ {N} with captions
├─ 📸 Instagram: {N} reels │ {N} views │ {N} likes │ {N} with captions
├─ 🟡 HN: {N} stories │ {N} points │ {N} comments
├─ 🦋 Bluesky: {N} posts │ {N} likes │ {N} reposts
├─ 🇺🇸 Truth Social: {N} posts │ {N} likes │ {N} reposts
├─ 📊 Polymarket: {N} markets │ {short summary of up to 5 most relevant market odds, e.g. "Championship: 12%, #1 Seed: 28%, Big 12: 64%, vs Kansas: 71%"}
├─ 🌐 Web: {N} pages — Source Name, Source Name, Source Name
└─ 🗣️ Top voices: @{handle1} ({N} likes), @{handle2} │ r/{sub1}, r/{sub2}
---
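The "calculate actual totals" rule amounts to summing the engagement markers in the script output. A sketch assuming the markers literally look like [120likes, 30rt] and [450pts, 88cmt] (the function name is an assumption):

```python
import re

def engagement_totals(output: str) -> dict:
    """Sum X and Reddit engagement markers from the raw research output."""
    return {
        "likes":    sum(map(int, re.findall(r"\[(\d+)likes,", output))),
        "reposts":  sum(map(int, re.findall(r"(\d+)rt\]", output))),
        "points":   sum(map(int, re.findall(r"\[(\d+)pts,", output))),
        "comments": sum(map(int, re.findall(r"(\d+)cmt\]", output))),
    }
```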
🌐 Web: line — how to extract site names from URLs: strip the protocol, path, and www. — use the recognizable publication name:
- https://later.com/blog/instagram-reels-trends/ → Later
- https://socialbee.com/blog/instagram-trends/ → SocialBee
- https://buffer.com/resources/instagram-algorithms/ → Buffer
- https://www.cnn.com/2026/02/22/tech/... → CNN
- https://medium.com/the-ai-studio/... → Medium
- https://radicaldatascience.wordpress.com/... → Radical Data Science
List as comma-separated plain names: Later, SocialBee, Buffer, CNN, Medium
⚠️ WebSearch citation — ALREADY SATISFIED. DO NOT ADD A SOURCES SECTION. The WebSearch tool mandates source citation. That requirement is FULLY satisfied by the source names on the 🌐 Web: line above. Do NOT append a separate "Sources:" section at the end of your response. Do NOT list URLs anywhere. The 🌐 Web: line IS your citation. Nothing more is needed.
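The stripping rules can be approximated mechanically; treat this sketch as a fallback, since the capitalization of multi-word brands like SocialBee cannot be recovered from the domain alone (the function name and the length-3 initialism heuristic are assumptions):

```python
from urllib.parse import urlparse

def site_name(url: str) -> str:
    """Strip protocol, path, and www. from a URL to get a readable publication name."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    name = host.split(".")[0]  # later.com -> later
    # Short names are usually initialisms (cnn -> CNN); otherwise capitalize
    return name.upper() if len(name) <= 3 else name.capitalize()
```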
CRITICAL: Omit any source line that returned 0 results. Do NOT show "0 threads", "0 stories", "0 markets", or "(no results this cycle)". If a source found nothing, DELETE that line entirely - don't include it at all. NEVER use plain text dashes (-) or pipe (|). ALWAYS use ├─ └─ │ and the emoji.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If you catch yourself projecting your own knowledge instead of the research, rewrite it.
LAST - Invitation (adapt to QUERY_TYPE):
CRITICAL: Every invitation MUST include 2-3 specific example suggestions based on what you ACTUALLY learned from the research. Don't be generic — show the user you absorbed the content by referencing real things from the results.
If QUERY_TYPE = PROMPTING:
---
I'm now an expert on {TOPIC} for {TARGET_TOOL}. What do you want to make? For example:
- [specific idea based on popular technique from research]
- [specific idea based on trending style/approach from research]
- [specific idea riffing on what people are actually creating]
Just describe your vision and I'll write a prompt you can paste straight into {TARGET_TOOL}.
If QUERY_TYPE = RECOMMENDATIONS:
---
I'm now an expert on {TOPIC}. Want me to go deeper? For example:
- [Compare specific item A vs item B from the results]
- [Explain why item C is trending right now]
- [Help you get started with item D]
If QUERY_TYPE = NEWS:
---
I'm now an expert on {TOPIC}. Some things you could ask:
- [Specific follow-up question about the biggest story]
- [Question about implications of a key development]
- [Question about what might happen next based on current trajectory]
If QUERY_TYPE = COMPARISON:
---
I've compared {TOPIC_A} vs {TOPIC_B} using the latest community data. Some things you could ask:
- [Deep dive into {TOPIC_A} alone with /last30days {TOPIC_A}]
- [Deep dive into {TOPIC_B} alone with /last30days {TOPIC_B}]
- [Focus on a specific dimension from the comparison table]
- [Look at a different time period with --days=7 or --days=90]
If QUERY_TYPE = GENERAL:
---
I'm now an expert on {TOPIC}. Some things I can help with:
- [Specific question based on the most discussed aspect]
- [Specific creative/practical application of what you learned]
- [Deeper dive into a pattern or debate from the research]
Example invitations (to show the quality bar):
For /last30days nano banana pro prompts for Gemini:
I'm now an expert on Nano Banana Pro for Gemini. What do you want to make? For example:
- Photorealistic product shots with natural lighting (the most requested style right now)
- Logo designs with embedded text (Gemini's new strength per the research)
- Multi-reference style transfer from a mood board
Just describe your vision and I'll write a prompt you can paste straight into Gemini.
For /last30days kanye west (GENERAL):
I'm now an expert on Kanye West. Some things I can help with:
- What's the real story behind the apology letter — genuine or PR move?
- Break down the BULLY tracklist reactions and what fans are expecting
- Compare how Reddit vs X are reacting to the Bianca narrative
For /last30days war in Iran (NEWS):
I'm now an expert on the Iran situation. Some things you could ask:
- What are the realistic escalation scenarios from here?
- How is this playing differently in US vs international media?
- What's the economic impact on oil markets so far?
STOP and wait for the user to respond. Do NOT call any tools after displaying the invitation. The research script already saved raw data to ~/Documents/Last30Days/ via --save-dir.
Read their response and match the intent:
Only write a prompt when the user wants one. Don't force a prompt on someone who asked "what could happen next with Iran."
When the user wants a prompt, write a single, highly-tailored prompt using your research expertise.
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT.
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS]
---
This uses [
Permissions overview: Reads public web/platform data and optionally saves research briefings to
~/Documents/Last30Days/. X/Twitter search uses optional user-provided tokens (AUTH_TOKEN/CT0 env vars). Bluesky search uses optional app password (BSKY_HANDLE/BSKY_APP_PASSWORD env vars - create at bsky.app/settings/app-passwords). Truth Social search uses optional bearer token (TRUTHSOCIAL_TOKEN env var - extract from browser dev tools). All credential usage and data writes are documented in the Security & Permissions section.
Research ANY topic across Reddit, X, Bluesky, Truth Social, YouTube, TikTok, Hacker News, Polymarket, and the web. Surface what people are actually discussing, recommending, betting on, and debating right now.
Before doing anything, parse the user's input for:
Common patterns:
[topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED[topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED[topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OKvs or versus with spaces)IMPORTANT: Do NOT ask about target tool before research.
Store these variables:
TOPIC = [extracted topic]TARGET_TOOL = [extracted tool, or "unknown" if not specified]QUERY_TYPE = [RECOMMENDATIONS | NEWS | HOW-TO | COMPARISON | GENERAL]TOPIC_A = [first item] (only if COMPARISON)TOPIC_B = [second item] (only if COMPARISON)DISPLAY your parsing to the user. Before running any tools, output:
I'll research {TOPIC} across Reddit, X, Bluesky, Truth Social, TikTok, and the web to find what's been discussed in the last 30 days.
Parsed intent:
- TOPIC = {TOPIC}
- TARGET_TOOL = {TARGET_TOOL or "unknown"}
- QUERY_TYPE = {QUERY_TYPE}
Research typically takes 2-8 minutes (niche topics take longer). Starting now.
If TARGET_TOOL is known, mention it in the intro: "...to find {QUERY_TYPE}-style content for use in {TARGET_TOOL}."
This text MUST appear before you call any tools. It confirms to the user that you understood their request.
If TOPIC looks like it could have its own X/Twitter account - people, creators, brands, products, tools, companies, communities (e.g., "Dor Brothers", "Jason Calacanis", "Nano Banana Pro", "Seedance", "Midjourney"), do ONE quick WebSearch:
WebSearch("{TOPIC} X twitter handle site:x.com")
From the results, extract their X/Twitter handle. Look for:
x.com/{handle} or twitter.com/{handle}Verify the account is real, not a parody/fan account. Check for:
If you find a clear, verified handle, pass it as --x-handle={handle} (without @). This searches that account's posts directly - finding content they posted that doesn't mention their own name.
Skip this step if:
--quick depthStore: RESOLVED_HANDLE = {handle or empty}
If --agent appears in ARGUMENTS (e.g., /last30days plaud granola --agent):
AskUserQuestion calls - use TARGET_TOOL = "unknown" if not specifiedAgent mode saves raw research data to ~/Documents/Last30Days/ automatically via --save-dir (handled by the script, no extra tool calls).
Agent mode report format:
## Research Report: {TOPIC}
Generated: {date} | Sources: Reddit, X, Bluesky, Truth Social, YouTube, TikTok, HN, Polymarket, Web
### Key Findings
[3-5 bullet points, highest-signal insights with citations]
### What I learned
{The full "What I learned" synthesis from normal output}
### Stats
{The standard stats block}
When the user asks "X vs Y", run THREE research passes in parallel:
Pass 1 + 2 (parallel Bash calls):
# Run BOTH of these as parallel Bash tool calls in a single message:
python3 "${SKILL_ROOT}/scripts/last30days.py" {TOPIC_A} --emit=compact --no-native-web --save-dir=~/Documents/Last30Days
python3 "${SKILL_ROOT}/scripts/last30days.py" {TOPIC_B} --emit=compact --no-native-web --save-dir=~/Documents/Last30Days
Pass 3 (after passes 1+2 complete):
python3 "${SKILL_ROOT}/scripts/last30days.py" "{TOPIC_A} vs {TOPIC_B}" --emit=compact --no-native-web --save-dir=~/Documents/Last30Days
Then do WebSearch for: {TOPIC_A} vs {TOPIC_B} comparison 2026 and {TOPIC_A} vs {TOPIC_B} which is better.
Skip the normal Step 1 below - go directly to the comparison synthesis format (see "If QUERY_TYPE = COMPARISON" in the synthesis section).
Step 1: Run the research script (FOREGROUND — do NOT background this)
CRITICAL: Run this command in the FOREGROUND with a 5-minute timeout. Do NOT use run_in_background. The full output contains Reddit, X, AND YouTube data that you need to read completely.
IMPORTANT: The script handles API key/Codex auth detection automatically. Run it and check the output to determine mode.
# Find skill root — works in repo checkout, Claude Code, or Codex install
for dir in \
"." \
"${CLAUDE_PLUGIN_ROOT:-}" \
"${GEMINI_EXTENSION_DIR:-}" \
"$HOME/.claude/plugins/marketplaces/last30days-skill" \
"$HOME/.gemini/extensions/last30days-skill" \
"$HOME/.gemini/extensions/last30days" \
"$HOME/.claude/skills/last30days" \
"$HOME/.agents/skills/last30days" \
"$HOME/.codex/skills/last30days"; do
[ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done
if [ -z "${SKILL_ROOT:-}" ]; then
echo "ERROR: Could not find scripts/last30days.py" >&2
exit 1
fi
python3 "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --emit=compact --no-native-web --save-dir=~/Documents/Last30Days # Add --x-handle=HANDLE if RESOLVED_HANDLE is set
Use a timeout of 300000 (5 minutes) on the Bash call. The script typically takes 1-3 minutes.
The script will automatically:
Read the ENTIRE output. It contains EIGHT data sections in this order: Reddit items, X items, YouTube items, TikTok items, Instagram Reels items, Hacker News items, Polymarket items, and WebSearch items. If you miss sections, you will produce incomplete stats.
YouTube items in the output look like: **{video_id}** (score:N) {channel_name} [N views, N likes] followed by a title, URL, transcript highlights (pre-extracted quotable excerpts from the video), and an optional full transcript in a collapsible section. Quote the highlights directly in your synthesis - they are the YouTube equivalent of Reddit top comments. Attribute quotes to the channel name. Count them and include them in your synthesis and stats block.
TikTok items in the output look like: **{TK_id}** (score:N) @{creator} [N views, N likes] followed by a caption, URL, hashtags, and optional caption snippet. Count them and include them in your synthesis and stats block.
Instagram Reels items in the output look like: **{IG_id}** (score:N) @{creator} (date) [N views, N likes] followed by caption text, URL, and optional transcript. Count them and include them in your synthesis and stats block. Instagram provides unique creator/influencer perspective — weight it alongside TikTok.
After the script finishes, do WebSearch to supplement with blogs, tutorials, and news.
For ALL modes , do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
best {TOPIC} recommendations{TOPIC} list examplesmost popular {TOPIC}If NEWS ("what's happening with X", "X news"):
{TOPIC} news 2026{TOPIC} announcement updateIf PROMPTING ("X prompts", "prompting for X"):
{TOPIC} prompts examples 2026{TOPIC} techniques tipsIf GENERAL (default):
{TOPIC} 2026{TOPIC} discussionFor ALL query types:
Options (passed through from user's command):
--days=N → Look back N days instead of 30 (e.g., --days=7 for weekly roundup)--quick → Faster, fewer sources (8-12 each)--deep → Comprehensive (50-70 Reddit, 40-60 X)After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
Weight Reddit/X sources HIGHER (they have engagement signals: upvotes, likes)
Weight YouTube sources HIGH (they have views, likes, and transcript content)
Weight TikTok sources HIGH (they have views, likes, and caption content — viral signal)
Weight WebSearch sources LOWER (no engagement data)
For Reddit: Pay special attention to top comments — they often contain the wittiest, most insightful, or funniest take. When a top comment has high upvotes (shown as 💬 Top comment (N upvotes)), quote it directly in your synthesis. Reddit's value is in the comments.
For YouTube: Quote transcript highlights directly in your synthesis. These are pre-extracted key moments from the video - treat them like Reddit top comments. Attribute to the channel name and include the actual quote. YouTube's value is in what creators SAY, not just their view counts.
Identify patterns that appear across ALL sources (strongest signals)
Note any contradictions between sources
Extract the top 3-5 actionable insights
Cross-platform signals are the strongest evidence. When items have [also on: Reddit, HN] or similar tags, it means the same story appears across multiple platforms. Lead with these cross-platform findings - they're the most important signals in the research.
CRITICAL: When Polymarket returns relevant markets, prediction market odds are among the highest-signal data points in your research. Real money on outcomes cuts through opinion. Treat them as strong evidence, not an afterthought.
How to interpret and synthesize Polymarket data:
Prefer structural/long-term markets over near-term deadlines. Championship odds > regular season title. Regime change > near-term strike deadline. IPO/major milestone > incremental update. Presidency > individual state primary. When multiple markets exist, the bigger question is more interesting to the user.
When the topic is an outcome in a multi-outcome market, call out that specific outcome's odds and movement. Don't just say "Polymarket has a #1 seed market" - say "Arizona has a 28% chance of being the #1 overall seed, up 10% this month." The user cares about THEIR topic's position in the market.
Weave odds into the narrative as supporting evidence. Don't isolate Polymarket data in its own paragraph. Instead: "Final Four buzz is building - Polymarket gives Arizona a 12% chance to win the championship (up 3% this week), and 28% to earn a #1 seed."
Citation format: Always include specific odds AND movement. "Polymarket has Arizona at 28% for a #1 seed (up 10% this month)" - not just "per Polymarket."
When multiple relevant markets exist, highlight 3-5 of the most interesting ones in your synthesis, ordered by importance (structural > near-term). Don't just pick the highest-volume one.
Domain examples of market importance ranking:
Do NOT display stats here - they come at the end, right before the invitation.
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID : If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
Structure the output as a side-by-side comparison using data from all three research passes:
# {TOPIC_A} vs {TOPIC_B}: What the Community Says (Last 30 Days)
## Quick Verdict
[1-2 sentence data-driven summary: which one the community prefers and why, with source counts]
## {TOPIC_A}
**Community Sentiment:** [Positive/Mixed/Negative] ({N} mentions across {sources})
**Strengths (what people love)**
- [Point 1 with source attribution]
- [Point 2]
**Weaknesses (common complaints)**
- [Point 1 with source attribution]
- [Point 2]
## {TOPIC_B}
**Community Sentiment:** [Positive/Mixed/Negative] ({N} mentions across {sources})
**Strengths (what people love)**
- [Point 1 with source attribution]
- [Point 2]
**Weaknesses (common complaints)**
- [Point 1 with source attribution]
- [Point 2]
## Head-to-Head
[Synthesis from the "A vs B" combined search - what people say when directly comparing]
| Dimension | {TOPIC_A} | {TOPIC_B} |
|-----------|-----------|-----------|
| [Key dimension 1] | [A's position] | [B's position] |
| [Key dimension 2] | [A's position] | [B's position] |
| [Key dimension 3] | [A's position] | [B's position] |
## The Bottom Line
Choose {TOPIC_A} if... Choose {TOPIC_B} if... (based on actual community data, not assumptions)
Then show combined stats from all three passes and the standard invitation section.
Identify from the ACTUAL RESEARCH OUTPUT:
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned with sources:
🏆 Most mentioned:
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle1, @handle2, r/sub, blog.com
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle3, r/sub2, Complex
Notable mentions: [other specific things with 1-2 mentions]
CRITICAL for RECOMMENDATIONS:
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
CITATION RULE: Cite sources sparingly to prove research is real.
CITATION PRIORITY (most to least preferred):
The tool's value is surfacing what PEOPLE are saying, not what journalists wrote. When both a web article and an X post cover the same fact, cite the X post.
URL FORMATTING: NEVER paste raw URLs anywhere in the output — not in synthesis, not in stats, not in sources.
🌐 Web: 10 pages — https://later.com/blog/..., https://buffer.com/...🌐 Web: 10 pages — Later, Buffer, CNN, SocialBee Use the publication/site name, not the URL. The user doesn't need links — they need clean, readable text.BAD: "His album is set for March 20 (per Rolling Stone; Billboard; Complex)." GOOD: "His album BULLY drops March 20 — fans on X are split on the tracklist, per @honest30bgfan_" GOOD: "Ye's apology got massive traction on r/hiphopheads" OK (web, only when Reddit/X don't have it): "The Hellwatt Festival runs July 4-18 at RCF Arena, per Billboard"
Lead with people, not publications. Start each topic with what Reddit/X users are saying/feeling, then add web context only if needed. The user came here for the conversation, not the press release.
What I learned:
**{Topic 1}** — [1-2 sentences about what people are saying, per @handle or r/sub]
**{Topic 2}** — [1-2 sentences, per @handle or r/sub]
**{Topic 3}** — [1-2 sentences, per @handle or r/sub]
KEY PATTERNS from the research:
1. [Pattern] — per @handle
2. [Pattern] — per r/sub
3. [Pattern] — per @handle
THEN - Stats (right before invitation):
CRITICAL: Calculate actual totals from the research output.
[Xlikes, Yrt] from each X post, [Xpts, Ycmt] from RedditCopy this EXACTLY, replacing only the {placeholders}:
---
✅ All agents reported back!
├─ 🟠 Reddit: {N} threads │ {N} upvotes │ {N} comments
├─ 🔵 X: {N} posts │ {N} likes │ {N} reposts
├─ 🔴 YouTube: {N} videos │ {N} views │ {N} with transcripts
├─ 🎵 TikTok: {N} videos │ {N} views │ {N} likes │ {N} with captions
├─ 📸 Instagram: {N} reels │ {N} views │ {N} likes │ {N} with captions
├─ 🟡 HN: {N} stories │ {N} points │ {N} comments
├─ 🦋 Bluesky: {N} posts │ {N} likes │ {N} reposts
├─ 🇺🇸 Truth Social: {N} posts │ {N} likes │ {N} reposts
├─ 📊 Polymarket: {N} markets │ {short summary of up to 5 most relevant market odds, e.g. "Championship: 12%, #1 Seed: 28%, Big 12: 64%, vs Kansas: 71%"}
├─ 🌐 Web: {N} pages — Source Name, Source Name, Source Name
└─ 🗣️ Top voices: @{handle1} ({N} likes), @{handle2} │ r/{sub1}, r/{sub2}
---
🌐 Web: line — how to extract site names from URLs: Strip the protocol, path, and www. — use the recognizable publication name:
https://later.com/blog/instagram-reels-trends/ → Laterhttps://socialbee.com/blog/instagram-trends/ → SocialBeehttps://buffer.com/resources/instagram-algorithms/ → Bufferhttps://www.cnn.com/2026/02/22/tech/... → CNNhttps://medium.com/the-ai-studio/... → Mediumhttps://radicaldatascience.wordpress.com/... → Radical Data Science List as comma-separated plain names: Later, SocialBee, Buffer, CNN, Medium⚠️ WebSearch citation — ALREADY SATISFIED. DO NOT ADD A SOURCES SECTION. The WebSearch tool mandates source citation. That requirement is FULLY satisfied by the source names on the 🌐 Web: line above. Do NOT append a separate "Sources:" section at the end of your response. Do NOT list URLs anywhere. The 🌐 Web: line IS your citation. Nothing more is needed.
CRITICAL: Omit any source line that returned 0 results. Do NOT show "0 threads", "0 stories", "0 markets", or "(no results this cycle)". If a source found nothing, DELETE that line entirely - don't include it at all. NEVER use plain text dashes (-) or pipe (|). ALWAYS use ├─ └─ │ and the emoji.
SELF-CHECK before displaying : Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If you catch yourself projecting your own knowledge instead of the research, rewrite it.
LAST - Invitation (adapt to QUERY_TYPE):
CRITICAL: Every invitation MUST include 2-3 specific example suggestions based on what you ACTUALLY learned from the research. Don't be generic — show the user you absorbed the content by referencing real things from the results.
If QUERY_TYPE = PROMPTING:
---
I'm now an expert on {TOPIC} for {TARGET_TOOL}. What do you want to make? For example:
- [specific idea based on popular technique from research]
- [specific idea based on trending style/approach from research]
- [specific idea riffing on what people are actually creating]
Just describe your vision and I'll write a prompt you can paste straight into {TARGET_TOOL}.
If QUERY_TYPE = RECOMMENDATIONS:
---
I'm now an expert on {TOPIC}. Want me to go deeper? For example:
- [Compare specific item A vs item B from the results]
- [Explain why item C is trending right now]
- [Help you get started with item D]
If QUERY_TYPE = NEWS:
---
I'm now an expert on {TOPIC}. Some things you could ask:
- [Specific follow-up question about the biggest story]
- [Question about implications of a key development]
- [Question about what might happen next based on current trajectory]
If QUERY_TYPE = COMPARISON:
---
I've compared {TOPIC_A} vs {TOPIC_B} using the latest community data. Some things you could ask:
- [Deep dive into {TOPIC_A} alone with /last30days {TOPIC_A}]
- [Deep dive into {TOPIC_B} alone with /last30days {TOPIC_B}]
- [Focus on a specific dimension from the comparison table]
- [Look at a different time period with --days=7 or --days=90]
If QUERY_TYPE = GENERAL:
---
I'm now an expert on {TOPIC}. Some things I can help with:
- [Specific question based on the most discussed aspect]
- [Specific creative/practical application of what you learned]
- [Deeper dive into a pattern or debate from the research]
Example invitations (to show the quality bar):
For /last30days nano banana pro prompts for Gemini:
I'm now an expert on Nano Banana Pro for Gemini. What do you want to make? For example:
- Photorealistic product shots with natural lighting (the most requested style right now)
- Logo designs with embedded text (Gemini's new strength per the research)
- Multi-reference style transfer from a mood board
Just describe your vision and I'll write a prompt you can paste straight into Gemini.
For /last30days kanye west (GENERAL):
I'm now an expert on Kanye West. Some things I can help with:
- What's the real story behind the apology letter — genuine or PR move?
- Break down the BULLY tracklist reactions and what fans are expecting
- Compare how Reddit vs X are reacting to the Bianca narrative
For /last30days war in Iran (NEWS):
I'm now an expert on the Iran situation. Some things you could ask:
- What are the realistic escalation scenarios from here?
- How is this playing differently in US vs international media?
- What's the economic impact on oil markets so far?
STOP and wait for the user to respond. Do NOT call any tools after displaying the invitation. The research script already saved raw data to ~/Documents/Last30Days/ via --save-dir.
Read their response and match the intent:
Only write a prompt when the user wants one. Don't force a prompt on someone who asked "what could happen next with Iran."
When the user wants a prompt, write a single, highly-tailored prompt using your research expertise.
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT.
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS]
---
This uses [brief 1-line explanation of what research insight you applied].
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
After delivering a prompt, offer to write more:
Want another prompt? Just tell me what you're creating next.
For the rest of this conversation, remember:
CRITICAL: After research is complete, treat yourself as an EXPERT on this topic.
When the user asks follow-up questions, answer from the research you already gathered. Only do new research if the user explicitly asks about a DIFFERENT topic.
After delivering a prompt, end with:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} YouTube videos ({sum} views) + {n} TikTok videos ({sum} views) + {n} Instagram reels ({sum} views) + {n} HN stories ({sum} points) + {n} web pages
Want another prompt? Just tell me what you're creating next.
What this skill does:
- Calls the ScrapeCreators API (api.scrapecreators.com) for Reddit search, subreddit discovery, and comment enrichment (requires SCRAPECREATORS_API_KEY — same key as TikTok + Instagram)
- Calls the OpenAI API (api.openai.com) for Reddit discovery (fallback if no SCRAPECREATORS_API_KEY)
- Calls the xAI API (api.x.ai) for X search
- Calls Algolia (hn.algolia.com) for Hacker News story and comment discovery (free, no auth)
- Calls Polymarket (gamma-api.polymarket.com) for prediction market discovery (free, no auth)
- Runs yt-dlp locally for YouTube search and transcript extraction (no API key, public data)
- Calls the ScrapeCreators API (api.scrapecreators.com) for TikTok and Instagram search and transcript/caption extraction (same SCRAPECREATORS_API_KEY as Reddit; PAYG after 100 free credits)

What this skill does NOT do:
--agent flag: non-interactive report output

Bundled scripts: scripts/last30days.py (main research engine), scripts/lib/ (search, enrichment, rendering modules), scripts/lib/vendor/bird-search/ (vendored X search client, MIT licensed)
Review scripts before first use to verify behavior.
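Credential setup can be sketched as shell exports. Every variable is optional and only unlocks its corresponding source; the variable names come from this document, but the values below are placeholders and the grouping comments are assumptions:

```shell
# Config sketch: optional environment variables before running the skill.
export SCRAPECREATORS_API_KEY="<key>"   # Reddit enrichment + TikTok + Instagram
export AUTH_TOKEN="<token>"             # X/Twitter search (paired with CT0)
export CT0="<csrf-token>"
export BSKY_HANDLE="<handle>"           # Bluesky (paired with BSKY_APP_PASSWORD)
export BSKY_APP_PASSWORD="<app-pass>"
export TRUTHSOCIAL_TOKEN="<bearer>"     # Truth Social
```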
Weekly Installs: 995
GitHub Stars: 4.7K
First Seen: Feb 7, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
Installed on: claude-code (826), opencode (600), codex (589), gemini-cli (584), github-copilot (582), cursor (573)