social-media-trends-research by drshailesh88/integrated_content_os
npx skills add https://github.com/drshailesh88/integrated_content_os --skill social-media-trends-research
Programmatic trend research using three free tools: pytrends (Google Trends), requests (Reddit's public JSON endpoints), and the Perplexity MCP.
This skill provides executable code for trend research. Use alongside content-marketing-social-listening for strategy and perplexity-search for deep queries.
# Install dependencies (one-time)
pip install pytrends requests --break-system-packages
No API keys required. Reddit scraping uses public .json endpoints.
from pytrends.request import TrendReq
import time
# Initialize (no API key needed)
pytrends = TrendReq(hl='en-US', tz=330) # tz=330 for India (IST)
# Get real-time trending searches
trending = pytrends.trending_searches(pn='india')
print(trending.head(20))
from pytrends.request import TrendReq
import time
pytrends = TrendReq(hl='en-US', tz=330)
# Define your niche keywords (max 5 per request)
keywords = ['heart health', 'cardiology', 'cholesterol']
# Build payload
pytrends.build_payload(keywords, timeframe='now 7-d', geo='IN')
# Get interest over time
interest = pytrends.interest_over_time()
print(interest)
# CRITICAL: Wait between requests to avoid rate limiting
time.sleep(3)
# Get related queries (THIS IS GOLD - shows rising topics)
related = pytrends.related_queries()
for kw in keywords:
    print(f"\n=== Rising queries for '{kw}' ===")
    rising = related[kw]['rising']
    if rising is not None:
        print(rising.head(10))
from pytrends.request import TrendReq
import time
pytrends = TrendReq(hl='en-US', tz=330)
def find_breakout_topics(keyword, geo=''):
    """Find topics with explosive growth (potential viral content)"""
    pytrends.build_payload([keyword], timeframe='today 3-m', geo=geo)
    time.sleep(3)  # Rate limiting
    related = pytrends.related_queries()
    rising = related[keyword]['rising']
    if rising is not None:
        # Filter for breakout topics (marked as "Breakout" or very high %)
        breakouts = rising[rising['value'] >= 1000]  # 1000%+ growth
        return breakouts
    return None
# Example usage
breakouts = find_breakout_topics('heart health', geo='IN')
print(breakouts)
import time
# SAFE: 1 request per 3-5 seconds for casual use
time.sleep(5)
# BULK RESEARCH: 1 request per 60 seconds
time.sleep(60)
# If you get rate limited (429 error): Wait 60-120 seconds, then continue
# If persistent issues: Wait 4-6 hours before resuming
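The wait-and-escalate policy above can be sketched as a small backoff helper. This is a minimal illustration, not part of pytrends: `backoff_delay` is a hypothetical function, and the base/cap values simply encode the 60-120 second and multi-hour waits described above.

```python
import random

def backoff_delay(attempt, base=60, cap=7200):
    """Seconds to wait after the Nth consecutive 429 (attempt starts at 0).

    The first retry waits 60-120 s; each further failure doubles the
    delay, up to a multi-hour cap, matching the guidance above.
    """
    delay = min(base * (2 ** attempt), cap)
    # Jitter spreads retries out so parallel scripts don't sync up
    return delay + random.uniform(0, base)
```

Call it in a retry loop: on a 429, `time.sleep(backoff_delay(attempt))`, then try again with `attempt + 1`.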
| Timeframe | Use Case |
|---|---|
| 'now 1-H' | Last hour (real-time spikes) |
| 'now 4-H' | Last 4 hours |
| 'now 1-d' | Last 24 hours |
| 'now 7-d' | Last 7 days (best for trends) |
| 'today 1-m' | Last 30 days |
| 'today 3-m' | Last 90 days (velocity analysis) |
| 'today 12-m' | Last year (seasonal patterns) |
import requests
import time
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}
# Search Reddit for your niche
url = "https://www.reddit.com/search.json?q=heart+health&limit=10&sort=relevance&t=week"
response = requests.get(url, headers=headers, timeout=10)
data = response.json()
# Display results
for child in data.get('data', {}).get('children', []):
    post = child.get('data', {})
    print(f"Title: {post.get('title')}")
    print(f"Subreddit: r/{post.get('subreddit')}")
    print(f"Score: {post.get('score')}")
    print("---")
import requests
import time
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}
# Define subreddits relevant to your niche
subreddits = ['cardiology', 'health', 'medicine']
for sub in subreddits:
    print(f"\n=== Hot in r/{sub} ===")
    try:
        url = f"https://www.reddit.com/r/{sub}/hot.json?limit=10"
        response = requests.get(url, headers=headers, timeout=10)
        data = response.json()
        for child in data.get('data', {}).get('children', [])[:5]:
            post = child.get('data', {})
            # Default to '' so a missing title doesn't raise on slicing
            print(f"- [{post.get('score')}] {post.get('title', '')[:60]}...")
    except Exception as e:
        print(f"Error: {e}")
    time.sleep(3)  # Rate limiting between requests
A helper class is included in scripts/reddit_scraper.py:
from scripts.reddit_scraper import SimpleRedditScraper

scraper = SimpleRedditScraper()

# Search
results = scraper.search("heart health tips", limit=20)
for post in results['posts']:
    print(f"[{post['score']}] r/{post['subreddit']}: {post['title']}")

# Get subreddit hot posts
results = scraper.get_subreddit("health", sort="hot", limit=10)
for post in results['posts']:
    print(f"[{post['score']}] {post['title']}")
import time
# SAFE: 1 request per 2-3 seconds
time.sleep(3)
# If you get 429 errors: Wait 5-10 minutes
# Never do more than 60 requests per hour
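Both limits above (one request every 2-3 seconds, never more than 60 per hour) can be enforced with a sliding-window throttle. This is a hypothetical helper, not part of any library; the injectable `clock` parameter exists only so the logic can be exercised without real waiting.

```python
import time
from collections import deque

class HourlyThrottle:
    """Cap requests at `max_per_hour` with a minimum gap between calls."""

    def __init__(self, max_per_hour=60, min_gap=3.0, clock=time.monotonic):
        self.max_per_hour = max_per_hour
        self.min_gap = min_gap
        self.clock = clock
        self.history = deque()  # timestamps of recent requests

    def wait_time(self):
        """Seconds to sleep before the next request is allowed."""
        now = self.clock()
        # Drop timestamps that fell out of the one-hour window
        while self.history and now - self.history[0] > 3600:
            self.history.popleft()
        delay = 0.0
        if self.history:
            delay = max(delay, self.min_gap - (now - self.history[-1]))
        if len(self.history) >= self.max_per_hour:
            delay = max(delay, 3600 - (now - self.history[0]))
        return max(delay, 0.0)

    def record(self):
        """Call right after each request goes out."""
        self.history.append(self.clock())

# Usage: before each requests.get(...), do
#     time.sleep(throttle.wait_time()); throttle.record()
throttle = HourlyThrottle()
```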
Use Claude's built-in Perplexity MCP for platforms you can't scrape directly.
Twitter/X Trends:
"What are the most discussed [YOUR NICHE] topics on Twitter/X this week?
Include specific examples of viral tweets and their engagement."
TikTok Trends (works from India):
"What [YOUR NICHE] content is trending on TikTok right now?
Include hashtags, view counts, and content formats that are working."
YouTube Trends:
"What [YOUR NICHE] videos are getting the most views on YouTube this week?
Include channel names, view counts, and video topics."
LinkedIn Professional:
"What [YOUR NICHE] topics are professionals discussing on LinkedIn this week?
Include examples of high-engagement posts."
General Viral Content:
"What [YOUR NICHE] content has gone viral across social media in the past 7 days?
Include platform, format, and why it resonated."
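The templates above all share a `[YOUR NICHE]` placeholder, so they can be filled programmatically before being sent to Perplexity. A minimal sketch — `TEMPLATES` and `build_prompts` are hypothetical names, and only two of the five templates are shown:

```python
# Fill the [YOUR NICHE] placeholder in the prompt templates above
TEMPLATES = {
    'twitter': ("What are the most discussed {niche} topics on Twitter/X "
                "this week? Include specific examples of viral tweets and "
                "their engagement."),
    'tiktok': ("What {niche} content is trending on TikTok right now? "
               "Include hashtags, view counts, and content formats that "
               "are working."),
}

def build_prompts(niche):
    """Return one ready-to-send prompt per platform."""
    return {name: t.format(niche=niche) for name, t in TEMPLATES.items()}

prompts = build_prompts('cardiology')
print(prompts['twitter'])
```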
If you have the perplexity-search skill installed:
python scripts/perplexity_search.py \
"What cardiology topics are trending on Twitter and TikTok this week? Include specific viral posts and hashtags." \
--model sonar-pro
from pytrends.request import TrendReq
import requests
import time
import json
from datetime import datetime

class TrendResearcher:
    def __init__(self):
        self.pytrends = TrendReq(hl='en-US', tz=330)
        self.reddit_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
        }

    def _reddit_request(self, url):
        """Make a Reddit API request."""
        try:
            response = requests.get(url, headers=self.reddit_headers, timeout=10)
            response.raise_for_status()
            return response.json()
        except Exception as e:
            return {'error': str(e)}

    def research_niche(self, keywords, subreddits=None, geo='IN'):
        """
        Complete trend research for a niche.

        Args:
            keywords: List of keywords (max 5)
            subreddits: List of subreddit names to monitor
            geo: Geographic region code

        Returns:
            Dictionary with all research data
        """
        results = {
            'timestamp': datetime.now().isoformat(),
            'keywords': keywords,
            'google_trends': {},
            'reddit': {},
            'recommendations': []
        }

        # 1. Google Trends - interest over time
        print("📊 Fetching Google Trends data...")
        try:
            self.pytrends.build_payload(keywords[:5], timeframe='now 7-d', geo=geo)
            results['google_trends']['interest'] = self.pytrends.interest_over_time().to_dict()
            time.sleep(5)

            # Related queries (rising topics)
            related = self.pytrends.related_queries()
            results['google_trends']['rising_queries'] = {}
            for kw in keywords[:5]:
                rising = related[kw]['rising']
                if rising is not None:
                    results['google_trends']['rising_queries'][kw] = rising.head(10).to_dict()
            time.sleep(5)
        except Exception as e:
            results['google_trends']['error'] = str(e)

        # 2. Reddit research
        print("👽 Fetching Reddit discussions...")
        if subreddits:
            for sub in subreddits[:5]:
                try:
                    url = f"https://www.reddit.com/r/{sub}/hot.json?limit=10"
                    data = self._reddit_request(url)
                    posts = []
                    for child in data.get('data', {}).get('children', [])[:5]:
                        post = child.get('data', {})
                        posts.append({
                            'title': post.get('title', ''),
                            'score': post.get('score', 0),
                            'comments': post.get('num_comments', 0)
                        })
                    results['reddit'][sub] = posts
                    time.sleep(3)
                except Exception as e:
                    results['reddit'][sub] = {'error': str(e)}

        # 3. Keyword search on Reddit
        print("🔍 Searching Reddit for keywords...")
        for kw in keywords[:3]:
            try:
                url = f"https://www.reddit.com/search.json?q={kw}&limit=10&sort=relevance&t=week"
                data = self._reddit_request(url)
                posts = []
                for child in data.get('data', {}).get('children', [])[:5]:
                    post = child.get('data', {})
                    posts.append({
                        'title': post.get('title', ''),
                        'subreddit': post.get('subreddit', ''),
                        'score': post.get('score', 0),
                        'comments': post.get('num_comments', 0)
                    })
                results['reddit'][f'search_{kw}'] = posts
                time.sleep(3)
            except Exception as e:
                results['reddit'][f'search_{kw}'] = {'error': str(e)}

        # 4. Generate recommendations
        results['recommendations'] = self._generate_recommendations(results)
        return results

    def _generate_recommendations(self, data):
        """Generate content recommendations from research data"""
        recommendations = []

        # From rising Google Trends queries
        rising = data.get('google_trends', {}).get('rising_queries', {})
        for kw, queries in rising.items():
            if isinstance(queries, dict) and 'query' in queries:
                for query in list(queries['query'].values())[:3]:
                    recommendations.append({
                        'source': 'Google Trends',
                        'topic': query,
                        'reason': f"Rising search term related to '{kw}'"
                    })

        # From Reddit hot posts
        for sub, posts in data.get('reddit', {}).items():
            if isinstance(posts, list):
                for post in posts[:2]:
                    if post.get('score', 0) > 50:
                        recommendations.append({
                            'source': f'Reddit r/{sub}',
                            'topic': post.get('title', ''),
                            'reason': f"High engagement ({post.get('score')} upvotes)"
                        })
        return recommendations

# Usage example
if __name__ == "__main__":
    researcher = TrendResearcher()
    results = researcher.research_niche(
        keywords=['heart health', 'cardiology', 'cholesterol'],
        subreddits=['cardiology', 'health', 'medicine'],
        geo='IN'
    )

    # Save results
    with open('trend_research.json', 'w') as f:
        json.dump(results, f, indent=2, default=str)

    # Print recommendations
    print("\n🎯 CONTENT RECOMMENDATIONS:")
    for rec in results['recommendations']:
        print(f"- [{rec['source']}] {rec['topic']}")
        print(f"  Why: {rec['reason']}")
from pytrends.request import TrendReq
import requests
import time
# Quick Google Trends check
pytrends = TrendReq(hl='en-US', tz=330)
pytrends.build_payload(['your keyword'], timeframe='now 1-d')
print(pytrends.related_queries()['your keyword']['rising'])
time.sleep(5)
# Quick Reddit check
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}
url = "https://www.reddit.com/search.json?q=your+keyword&limit=10&t=day"
response = requests.get(url, headers=headers, timeout=10)
data = response.json()
for child in data.get('data', {}).get('children', [])[:5]:
    post = child.get('data', {})
    print(f"[{post.get('score')}] {post.get('title')}")
# Use the TrendResearcher class above with:
# - 5 core keywords
# - 5 relevant subreddits
# - 90-day timeframe for velocity analysis
# Then use Perplexity MCP for:
# - Twitter trends in your niche
# - TikTok viral content
# - YouTube trending videos
# - LinkedIn discussions
After research, pass findings to your writing skills:
1. Run trend research (this skill)
2. Identify top 3-5 opportunities
3. Use content-marketing-social-listening for strategy
4. Use cardiology-content-repurposer or similar for content creation
5. Use authentic-voice for final polish
| Error | Solution |
|---|---|
| 429 Too Many Requests | Wait 60 seconds, then increase sleep time |
| Empty results | Check if keyword has search volume |
| Connection error | Check internet, retry in 5 minutes |
| Error | Solution |
|---|---|
| 429 Rate Limited | Wait 10 minutes |
| Subreddit not found | Check subreddit name spelling |
| Empty results | Subreddit may be private or quarantined |
| Connection timeout | Increase timeout, check internet |
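The Reddit table above maps cleanly onto HTTP status codes, so the diagnosis can be automated. This is a hypothetical helper: the status-code meanings are standard HTTP, but the suggested fixes come from this guide, not from any official Reddit API contract.

```python
def diagnose_reddit(status_code):
    """Return (problem, suggested fix) for a Reddit .json response."""
    if status_code == 429:
        return ('rate limited', 'wait 10 minutes, then slow down')
    if status_code == 404:
        return ('subreddit not found', 'check the subreddit name spelling')
    if status_code in (403, 451):
        return ('not accessible', 'subreddit may be private or quarantined')
    if status_code >= 500:
        return ('Reddit-side error', 'retry in a few minutes')
    return ('ok', 'no action needed')

print(diagnose_reddit(429))
```

Used with the scraper above, check `response.status_code` before calling `response.json()` and print the diagnosis instead of failing.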
| Platform | Tool | Cost | Risk |
|---|---|---|---|
| Google Trends | pytrends | Free | Very Low |
| Reddit | requests (public JSON) | Free | Low |
| Twitter/X | Perplexity MCP | Free* | None |
| TikTok | Perplexity MCP | Free* | None |
| YouTube | Perplexity MCP | Free* | None |
| LinkedIn | Perplexity MCP | Free* | None |
*Uses Claude's built-in MCP or OpenRouter credits if using perplexity-search skill
scripts/trend_research.py: Main CLI tool for complete trend research
scripts/reddit_scraper.py: Simple Reddit scraper class (no API keys)

Weekly Installs
265
Repository
GitHub Stars
2
First Seen
Jan 24, 2026
Security Audits
Gen Agent Trust Hub: Pass
Socket: Pass
Snyk: Warn
Installed on
opencode: 242
gemini-cli: 234
codex: 230
cursor: 224
github-copilot: 217
kimi-cli: 204