npx skills add https://github.com/tristanmanchester/agent-skills --skill reddit-readonly

Read-only Reddit browsing for Clawdbot.
All commands print JSON to stdout.

Success: { "ok": true, "data": ... }
Failure: { "ok": false, "error": { "message": "...", "details": "..." } }

node {baseDir}/scripts/reddit-readonly.mjs posts <subreddit> \
  --sort hot|new|top|controversial|rising \
  --time day|week|month|year|all \
  --limit 10 \
  --after <token>
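Because every command prints a single JSON envelope, a caller can branch on `ok` before touching the payload. A minimal sketch in Node, assuming only the envelope shape shown above (the exact contents of `data` vary by command and are not documented here):

```javascript
// Parse the JSON envelope printed by reddit-readonly.mjs.
// Assumes only the { ok, data } / { ok, error } shape shown above.
function parseEnvelope(stdout) {
  const parsed = JSON.parse(stdout);
  if (parsed.ok) return parsed.data;
  const { message, details } = parsed.error;
  throw new Error(`reddit-readonly failed: ${message} (${details})`);
}

// Example: a successful envelope yields the data payload directly.
const posts = parseEnvelope('{ "ok": true, "data": [{ "id": "abc123" }] }');
console.log(posts[0].id); // → abc123
```

An error envelope throws instead of returning, so downstream code never has to re-check `ok`.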
# Search within a subreddit
node {baseDir}/scripts/reddit-readonly.mjs search <subreddit> "<query>" --limit 10
# Search all of Reddit
node {baseDir}/scripts/reddit-readonly.mjs search all "<query>" --limit 10
# By post id or URL
node {baseDir}/scripts/reddit-readonly.mjs comments <post_id|url> --limit 50 --depth 6
node {baseDir}/scripts/reddit-readonly.mjs recent-comments <subreddit> --limit 25
node {baseDir}/scripts/reddit-readonly.mjs thread <post_id|url> --commentLimit 50 --depth 6
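The `comments` and `thread` commands accept either a bare post id or a full URL. A hypothetical helper showing how a permalink could be normalised to an id before being passed along (the `/comments/<id>/` path segment is standard Reddit URL structure; this helper is illustrative and not part of the script):

```javascript
// Extract the base-36 post id from a Reddit permalink, or return the
// input unchanged if it already looks like a bare id.
function toPostId(input) {
  const match = input.match(/\/comments\/([a-z0-9]+)/i);
  return match ? match[1] : input;
}

console.log(toPostId("https://www.reddit.com/r/python/comments/1abc2d/some_title/")); // → 1abc2d
console.log(toPostId("1abc2d")); // → 1abc2d
```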
Use this when the user describes criteria like: "Find posts about X in r/a, r/b, and r/c posted in the last 48 hours, excluding Y".
node {baseDir}/scripts/reddit-readonly.mjs find \
--subreddits "python,learnpython" \
--query "fastapi deployment" \
--include "docker,uvicorn,nginx" \
--exclude "homework,beginner" \
--minScore 2 \
--maxAgeHours 48 \
--perSubredditLimit 25 \
--maxResults 10 \
--rank new
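The filtering that the `find` flags describe (include/exclude terms, minimum score, maximum age) can be mirrored client-side when post-processing results. A sketch under the assumption that each post carries `title`, `score`, and `createdUtc` (seconds) fields; these field names are assumptions, not the script's documented output:

```javascript
// Keep posts that mention an include term (if any are given), mention
// no exclude term, meet a minimum score, and are newer than maxAgeHours.
function filterPosts(posts, { include = [], exclude = [], minScore = 0, maxAgeHours = Infinity }, nowMs = Date.now()) {
  const hasTerm = (title, terms) =>
    terms.some((t) => title.toLowerCase().includes(t.toLowerCase()));
  return posts.filter((p) =>
    (include.length === 0 || hasTerm(p.title, include)) &&
    !hasTerm(p.title, exclude) &&
    p.score >= minScore &&
    nowMs - p.createdUtc * 1000 <= maxAgeHours * 3600 * 1000
  );
}
```

With the flags from the example above, this keeps a post titled "Deploying FastAPI with Docker" while dropping anything mentioning "homework" or scoring below 2.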
Typical workflow: start with find (or posts/search) using small limits, then use thread to pull full context. If Reddit returns HTML instead of JSON, re-run the command (the script detects this and returns an error).
If requests fail repeatedly, reduce --limit and/or set slower pacing via env vars:
export REDDIT_RO_MIN_DELAY_MS=800
export REDDIT_RO_MAX_DELAY_MS=1800
export REDDIT_RO_TIMEOUT_MS=25000
export REDDIT_RO_USER_AGENT='script:clawdbot-reddit-readonly:v1.0.0 (personal)'
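The MIN/MAX delay variables suggest a randomised pause between requests rather than a fixed one. A sketch of that pacing pattern, reading the same env var names (the defaults and the fact that the script jitters uniformly are assumptions):

```javascript
// Pick a random delay between MIN and MAX ms, reading the same env
// vars the script honours (the 800/1800 defaults are assumed).
const MIN = Number(process.env.REDDIT_RO_MIN_DELAY_MS ?? 800);
const MAX = Number(process.env.REDDIT_RO_MAX_DELAY_MS ?? 1800);

function jitteredDelayMs(min = MIN, max = MAX) {
  return min + Math.random() * (max - min);
}

// Await this between requests to spread load over time.
const pause = () =>
  new Promise((resolve) => setTimeout(resolve, jitteredDelayMs()));
```

Jitter avoids the regular request rhythm that rate limiters are quickest to flag.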
Weekly Installs: 247
Repository
First Seen: Feb 1, 2026
Security Audits
Installed on:
  codex 235
  opencode 234
  gemini-cli 233
  github-copilot 229
  cursor 227
  kimi-cli 226