last30days by sickn33/antigravity-awesome-skills
npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill last30days
Research ANY topic across Reddit, X, and the web. Surface what people are actually discussing, recommending, and debating right now.
Use cases:
Before doing anything, parse the user's input for:
Common patterns:
- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK

IMPORTANT: Do NOT ask about the target tool before research.
Store these variables:
- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [RECOMMENDATIONS | NEWS | PROMPTING | GENERAL]

The skill works in three modes based on available API keys:
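For instance, the input "UI design prompts for Midjourney" might be stored as follows (the variable names come from this doc; the parsed values are illustrative):

```shell
# Illustrative parse of "UI design prompts for Midjourney".
TOPIC="UI design prompts"
TARGET_TOOL="Midjourney"
QUERY_TYPE="PROMPTING"
echo "$QUERY_TYPE: $TOPIC → $TARGET_TOOL"
```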
API keys are OPTIONAL. The skill will work without them using WebSearch fallback.
If the user wants to add API keys for better results:
mkdir -p ~/.config/last30days
cat > ~/.config/last30days/.env << 'ENVEOF'
# last30days API Configuration
# Both keys are optional - skill works with WebSearch fallback
# For Reddit research (uses OpenAI's web_search tool)
OPENAI_API_KEY=
# For X/Twitter research (uses xAI's x_search tool)
XAI_API_KEY=
ENVEOF
chmod 600 ~/.config/last30days/.env
echo "Config created at ~/.config/last30days/.env"
echo "Edit to add your API keys for enhanced research."
DO NOT stop if no keys are configured. Proceed with web-only mode.
IMPORTANT: The script handles API key detection automatically. Run it and check the output to determine mode.
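The detection logic lives inside the script itself, but a rough sketch of what "three modes based on available API keys" means might look like this (the `detect_mode` function is an assumption; only the `.env` path and key names come from the config above):

```shell
# Hypothetical sketch of mode detection - the real logic is in
# last30days.py; only the .env path and key names are documented.
detect_mode() {
  env_file="$1"
  has_openai=0; has_xai=0
  # A key counts as set only if something follows the "=".
  [ -f "$env_file" ] && grep -q '^OPENAI_API_KEY=.' "$env_file" && has_openai=1
  [ -f "$env_file" ] && grep -q '^XAI_API_KEY=.' "$env_file" && has_xai=1
  if [ "$has_openai" -eq 1 ] && [ "$has_xai" -eq 1 ]; then echo "full"
  elif [ "$has_openai" -eq 1 ] || [ "$has_xai" -eq 1 ]; then echo "partial"
  else echo "web-only"
  fi
}
detect_mode "$HOME/.config/last30days/.env"
```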
Step 1: Run the research script
TOPIC_FILE="$(mktemp)"
trap 'rm -f "$TOPIC_FILE"' EXIT
cat <<'LAST30DAYS_TOPIC' > "$TOPIC_FILE"
$ARGUMENTS
LAST30DAYS_TOPIC
python3 ~/.claude/skills/last30days/scripts/last30days.py "$(cat "$TOPIC_FILE")" --emit=compact 2>&1
The script will automatically:
Step 2: Check the output mode
The script output will indicate the mode:
Step 3: Do WebSearch
For ALL modes, do WebSearch to supplement the script results (or, in web-only mode, to provide all the data).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- best {TOPIC} recommendations
- {TOPIC} list examples
- most popular {TOPIC}

If NEWS ("what's happening with X", "X news"):
- {TOPIC} news 2026
- {TOPIC} announcement update

If PROMPTING ("X prompts", "prompting for X"):
- {TOPIC} prompts examples 2026
- {TOPIC} techniques tips

If GENERAL (default):
- {TOPIC} 2026
- {TOPIC} discussion

For ALL query types:
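For example, the RECOMMENDATIONS query templates would expand like this for a sample topic (the TOPIC value is illustrative):

```shell
# Expanding the RECOMMENDATIONS query templates for a sample TOPIC.
TOPIC="Claude Code skills"
Q1="best ${TOPIC} recommendations"
Q2="${TOPIC} list examples"
Q3="most popular ${TOPIC}"
printf '%s\n' "$Q1" "$Q2" "$Q3"
```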
Step 4: Wait for the background script to complete. Use TaskOutput to get the script results before proceeding to synthesis.
Depth options (passed through from user's command):
- --quick → Faster, fewer sources (8-12 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)

After all searches complete, internally synthesize (don't display stats yet):
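A minimal sketch of forwarding a depth option to the invocation from Step 1 (the flag-appending logic is an assumption; only `--quick`, `--deep`, and `--emit=compact` come from this doc):

```shell
# Hypothetical: append the user's depth flag, if any, to the script args.
DEPTH_FLAG="--deep"           # or "--quick", or "" for the default depth
ARGS="--emit=compact"
[ -n "$DEPTH_FLAG" ] && ARGS="$ARGS $DEPTH_FLAG"
echo "$ARGS"
```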
The Judge Agent must:
Do NOT display stats here - they come at the end, right before the invitation.
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID: If the user asks about "clawdbot skills" and the research returns ClawdBot content (a self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
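Counting mentions can be as mechanical as a sort-and-tally over the names extracted from the research output (the names and input below are made up for illustration):

```shell
# Toy tally: count how often each extracted skill name was mentioned.
MENTIONS='/commit
remotion
/commit
git-worktree
/commit
remotion'
printf '%s\n' "$MENTIONS" | sort | uniq -c | sort -rn
```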
Identify from the ACTUAL RESEARCH OUTPUT:
If research says "use JSON prompts" or "structured prompts", you MUST deliver prompts in that format later.
CRITICAL: Do NOT output any "Sources:" lists. The final display should be clean.
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned:
🏆 Most mentioned:
1. [Specific name] - mentioned {n}x (r/sub, @handle, blog.com)
2. [Specific name] - mentioned {n}x (sources)
3. [Specific name] - mentioned {n}x (sources)
4. [Specific name] - mentioned {n}x (sources)
5. [Specific name] - mentioned {n}x (sources)
Notable mentions: [other specific things with 1-2 mentions]
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
What I learned:
[2-4 sentences synthesizing key insights FROM THE ACTUAL RESEARCH OUTPUT.]
KEY PATTERNS I'll use:
1. [Pattern from research]
2. [Pattern from research]
3. [Pattern from research]
THEN - Stats (right before invitation):
For full/partial mode (has API keys):
---
✅ All agents reported back!
├─ 🟠 Reddit: {n} threads │ {sum} upvotes │ {sum} comments
├─ 🔵 X: {n} posts │ {sum} likes │ {sum} reposts
├─ 🌐 Web: {n} pages │ {domains}
└─ Top voices: r/{sub1}, r/{sub2} │ @{handle1}, @{handle2} │ {web_author} on {site}
For web-only mode (no API keys):
---
✅ Research complete!
├─ 🌐 Web: {n} pages │ {domains}
└─ Top sources: {author1} on {site1}, {author2} on {site2}
💡 Want engagement metrics? Add API keys to ~/.config/last30days/.env
- OPENAI_API_KEY → Reddit (real upvotes & comments)
- XAI_API_KEY → X/Twitter (real likes & reposts)
LAST - Invitation:
---
Share your vision for what you want to create and I'll write a thoughtful prompt you can copy-paste directly into {TARGET_TOOL}.
Use real numbers from the research output. The patterns should be actual insights from the research, not generic advice.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If the research was about ClawdBot (a self-hosted AI agent), your summary should be about ClawdBot, not Claude Code. If you catch yourself projecting your own knowledge instead of the research, rewrite it.
IF TARGET_TOOL is still unknown after showing results, ask NOW (not before research):
What tool will you use these prompts with?
Options:
1. [Most relevant tool based on research - e.g., if research mentioned Figma/Sketch, offer those]
2. Nano Banana Pro (image generation)
3. ChatGPT / Claude (text/code)
4. Other (tell me)
IMPORTANT: After displaying this, WAIT for the user to respond. Don't dump generic prompts.
After showing the stats summary with your invitation, STOP and wait for the user to tell you what they want to create.
When they respond with their vision (e.g., "I want a landing page mockup for my SaaS app"), THEN write a single, thoughtful, tailored prompt.
Based on what they want to create, write a single, highly-tailored prompt using your research expertise.
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT:
ANTI-PATTERN : Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS - if research said JSON, this is JSON. If research said natural language, this is prose. Match what works.]
---
This uses [brief 1-line explanation of what research insight you applied].
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
After delivering a prompt, offer to write more:
Want another prompt? Just tell me what you're creating next.
For the rest of this conversation, remember:
CRITICAL: After research is complete, you are now an EXPERT on this topic.
When the user asks follow-up questions:
Only do new research if the user explicitly asks about a DIFFERENT topic.
After delivering a prompt, end with:
For full/partial mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} web pages
Want another prompt? Just tell me what you're creating next.
For web-only mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} web pages from {domains}
Want another prompt? Just tell me what you're creating next.
💡 Unlock Reddit & X data: Add API keys to ~/.config/last30days/.env
Use this skill to execute the workflow or actions described in the overview.
Weekly Installs: 471
Repository
GitHub Stars: 27.6K
First Seen: Jan 26, 2026
Security Audits
Gen Agent Trust Hub: Pass │ Socket: Warn │ Snyk: Warn
Installed on
opencode: 418
gemini-cli: 403
codex: 388
github-copilot: 361
claude-code: 358
cursor: 357
AI Elements: AI-native app component library built on shadcn/ui for rapidly building conversational interfaces
56,200 weekly installs
Competitor research guide: SEO, content, backlink, and pricing analysis tools
231 weekly installs
Azure workload auto-upgrade assessment tool - supports Functions, App Service plans, and SKU migration
231 weekly installs
Kaizen continuous-improvement methodology: a guide to incremental optimization and mistake-proofing practices in software development
231 weekly installs
Software UI/UX design guide: user-centered design principles, WCAG accessibility, and platform conventions
231 weekly installs
Apify web scraping and automation platform - scrape Amazon, Google, LinkedIn, and more without coding
231 weekly installs
llama.cpp guide (Chinese): pure C/C++ LLM inference, optimized deployment for CPU and non-NVIDIA hardware
231 weekly installs