highlight-graph by readwiseio/readwise-skills

npx skills add https://github.com/readwiseio/readwise-skills --skill highlight-graph
You are building an interactive 2D force-graph visualization of the user's Readwise highlights, showing how ideas connect across books, articles, and other sources. Think Obsidian's graph view, but for highlights.
Check if Readwise MCP tools are available (e.g. mcp__readwise__readwise_list_highlights). If they are, use them throughout. If not, use the equivalent readwise CLI commands instead.
Open with:
Highlight Graph · Readwise
I'll pull your recent highlights, find connections between them, and build a graph you can explore. Give me a moment.
Fetch the user's most recent highlights using readwise_list_highlights with page_size=100. Fetch 2 pages (200 highlights) for a good starting graph. Each page returns highlights from most recent to least recent — use page=1, then page=2.
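If neither the MCP tools nor the readwise CLI are available, the same two-page fetch can be done against the Readwise REST API directly. A minimal sketch, assuming the v2 `highlights/` list endpoint and token auth; the function name and the injectable `opener` parameter are illustrative, not part of this skill:

```python
import json
from urllib import request

API_URL = "https://readwise.io/api/v2/highlights/"  # Readwise v2 list endpoint

def fetch_highlight_pages(token, pages=2, page_size=100, opener=None):
    """Fetch up to `pages` pages of highlights, most recent first.

    `opener` defaults to urllib but can be stubbed out for testing.
    """
    opener = opener or request.urlopen
    results = []
    for page in range(1, pages + 1):
        url = f"{API_URL}?page_size={page_size}&page={page}"
        req = request.Request(url, headers={"Authorization": f"Token {token}"})
        with opener(req) as resp:
            data = json.load(resp)
        results.extend(data.get("results", []))
        if not data.get("next"):  # no further pages, stop early
            break
    return results
```

With `page_size=100` and `pages=2` this yields the same 200-highlight starting set the instructions describe.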
Parse the API responses and build a JSON array of highlights. Each highlight needs:
{
"id": "12345",
"text": "The actual highlight text...",
"note": "User's note if any",
"book_id": 58741401,
"source_title": "The Goal",
"source_author": "Eliyahu Goldratt",
"url": "https://..."
}
Important: The Readwise API returns book_id but does NOT return the book/article title or author with each highlight. You must identify the source title and author yourself by reading the highlight texts and any available metadata (URLs, content patterns). Group highlights by book_id and infer the source from context. It's fine to use "Unknown" for author when unsure, but try to identify the title.
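The group-then-infer step above can be sketched as follows. This is a minimal illustration, not the skill's implementation: `titles_by_book` stands in for whatever title/author inferences you made from the highlight texts, and the helper names are hypothetical:

```python
import json
from collections import defaultdict

def group_by_book(raw_highlights):
    """Group raw API highlights by book_id so each source can be inferred once."""
    groups = defaultdict(list)
    for h in raw_highlights:
        groups[h.get("book_id")].append(h)
    return dict(groups)

def build_highlight_records(raw_highlights, titles_by_book=None):
    """Shape raw highlights into the records /tmp/highlights.json expects.

    `titles_by_book` maps book_id -> (source_title, source_author); anything
    you could not infer falls back to "Unknown", per the guidance above.
    """
    titles_by_book = titles_by_book or {}
    records = []
    for h in raw_highlights:
        title, author = titles_by_book.get(h.get("book_id"), ("Unknown", "Unknown"))
        records.append({
            "id": str(h["id"]),
            "text": h.get("text", ""),
            "note": h.get("note") or "",
            "book_id": h.get("book_id"),
            "source_title": title,
            "source_author": author,
            "url": h.get("url", ""),
        })
    return records
```

Writing the result is then just `json.dump(records, open("/tmp/highlights.json", "w"))`.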
Write this array to a temp file: /tmp/highlights.json
Write an empty connections file and run the build script to give the user something to look at immediately:
echo '[]' > /tmp/connections.json
python3 SKILL_DIR/build_graph.py --highlights /tmp/highlights.json --connections /tmp/connections.json --output highlight-graph.html
open highlight-graph.html
Tell the user:
Graph is open with {N} highlights across {N} sources. Finding connections between ideas now...
Launch parallel subagents (3-5 agents) to find semantic connections between highlights from different sources. Each agent should analyze a batch of highlights and return connections.
Batching strategy:
Each agent should return a JSON array of connections:
[
{
"a_id": "12345",
"b_id": "67890",
"label": "Feedback loops",
"why": "Both highlights discuss how tight feedback loops improve quality"
}
]
Quality over quantity. Only create a connection when the link is real and would be interesting to the reader. For 200 highlights, 15-30 total cross-source connections is ideal.
Merge all agent results into a single connections JSON array, write to /tmp/connections.json, and re-run the build script:
python3 SKILL_DIR/build_graph.py --highlights /tmp/highlights.json --connections /tmp/connections.json --output highlight-graph.html
open highlight-graph.html
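Since agents work in parallel, two of them can report the same pair (possibly in opposite directions) or reference a highlight id that was never fetched. A sketch of the merge step, assuming each agent returns the JSON array shape shown above; the function name is illustrative:

```python
def merge_connections(agent_results, valid_ids):
    """Merge per-agent connection lists, dropping duplicates and dangling ids.

    A pair counts as a duplicate regardless of direction: (a, b) == (b, a).
    """
    seen, merged = set(), []
    for batch in agent_results:
        for conn in batch:
            if conn["a_id"] not in valid_ids or conn["b_id"] not in valid_ids:
                continue  # agent referenced a highlight we never fetched
            key = frozenset((conn["a_id"], conn["b_id"]))
            if len(key) < 2 or key in seen:
                continue  # self-link, or pair already reported by another agent
            seen.add(key)
            merged.append(conn)
    return merged
```

The merged list is what gets written to /tmp/connections.json before re-running the build script.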
Present a summary:
Built a graph of {N} highlights across {N} sources, with {N} connections between ideas.
A few interesting connections I found:
- "{highlight A snippet}" ↔ "{highlight B snippet}" — {connection label}
- ...
The graph is open in your browser. Want to add more highlights?
If the user wants more, fetch additional pages (page=3, page=4, etc.), re-run source identification, find new connections, and rebuild. Or use readwise_search_highlights to pull highlights on a specific topic or from a specific book, and rebuild with just those.

build_graph.py (in this skill's directory) handles all the visualization logic. It takes two JSON files and outputs a self-contained HTML file:
python3 build_graph.py --highlights highlights.json --connections connections.json --output output.html
highlights.json: Array of {id, text, note, book_id, source_title, source_author, url}
connections.json: Array of {a_id, b_id, label, why}
The script handles:
The output is a single HTML file using force-graph from CDN. No server needed — just open in a browser.
Replace SKILL_DIR in commands above with the actual path to this skill's directory (where build_graph.py lives).
Weekly Installs
72
Repository
GitHub Stars
98
First Seen
8 days ago
Security Audits
Gen Agent Trust Hub: Warn · Socket: Warn · Snyk: Warn
Installed on
codex (72) · gemini-cli (72) · kimi-cli (72) · amp (72) · cline (72) · github-copilot (72)