nlm-skill by jacob-bd/notebooklm-mcp-cli
npx skills add https://github.com/jacob-bd/notebooklm-mcp-cli --skill nlm-skill
This skill provides comprehensive guidance for using NotebookLM via both the nlm CLI and MCP tools.
ALWAYS check which tools are available before proceeding:
- MCP tools starting with mcp__notebooklm-mcp__* or mcp_notebooklm_*
- nlm CLI commands via Bash

Decision logic:
has_mcp_tools = check_available_tools()  # Look for mcp__notebooklm-mcp__* or mcp_notebooklm_*
has_cli = check_bash_available()         # Can run nlm commands
if has_mcp_tools and has_cli:
    # ASK USER: "I can use either MCP tools or the nlm CLI. Which do you prefer?"
    user_preference = ask_user()
elif has_mcp_tools:
    # Use MCP tools directly
    mcp__notebooklm-mcp__notebook_list()
else:
    # Use CLI via Bash
    bash("nlm notebook list")
This skill documents BOTH approaches. Choose the appropriate one based on tool availability and user preference.
Run nlm --ai to get comprehensive AI-optimized documentation - it provides a complete view of all CLI capabilities.
nlm --help            # List all commands
nlm <command> --help  # Help for a specific command
nlm --ai              # Full AI-optimized documentation (RECOMMENDED)
nlm --version         # Check installed version
Key rules:
- Run nlm login before any operations; run nlm login again if commands start failing.
- --confirm is REQUIRED: all generation and delete commands need --confirm or -y (CLI) or confirm=True (MCP).
- --notebook-id: the flag is mandatory, not positional.
- nlm alias set <name> <uuid> simplifies long UUIDs; check nlm alias list first to avoid clashing with existing names.
- Do NOT use nlm chat start - it opens an interactive REPL that AI tools cannot control. Use nlm notebook query for one-shot Q&A instead.
- Use --quiet to capture IDs for piping; use --json only when you need to parse specific fields programmatically.
- --help: run nlm <command> --help to see the available options and flags for any command.

Use this tree to determine the right sequence of commands:
User wants to...
│
├─► Work with NotebookLM for the first time
│     └─► nlm login → nlm notebook create "Title"
│
├─► Add content to a notebook
│     ├─► From a URL/webpage → nlm source add <nb-id> --url "https://..."
│     ├─► From YouTube → nlm source add <nb-id> --url "https://youtube.com/..."
│     ├─► From pasted text → nlm source add <nb-id> --text "content" --title "Title"
│     ├─► From Google Drive → nlm source add <nb-id> --drive <doc-id> --type doc
│     └─► Discover new sources → nlm research start "query" --notebook-id <nb-id>
│
├─► Generate content from sources
│     ├─► Podcast/audio → nlm audio create <nb-id> --confirm
│     ├─► Written summary → nlm report create <nb-id> --confirm
│     ├─► Study materials → nlm quiz/flashcards create <nb-id> --confirm
│     ├─► Visual content → nlm mindmap/slides/infographic create <nb-id> --confirm
│     ├─► Video → nlm video create <nb-id> --confirm
│     └─► Extract data → nlm data-table create <nb-id> "description" --confirm
│
├─► Ask questions about sources
│     └─► nlm notebook query <nb-id> "question"
│           (Use --conversation-id for follow-ups)
│           ⚠️ Do NOT use `nlm chat start` - it's a REPL for humans only
│
├─► Check generation status
│     └─► nlm studio status <nb-id>
│
└─► Manage/clean up
      ├─► List notebooks → nlm notebook list
      ├─► List sources → nlm source list <nb-id>
      ├─► Delete a source → nlm source delete <source-id> --confirm
      └─► Delete a notebook → nlm notebook delete <nb-id> --confirm
If you are using MCP tools and hit authentication errors:
# Run the CLI authentication (works for both CLI and MCP)
nlm login
# Then reload tokens in MCP
mcp__notebooklm-mcp__refresh_auth()
Or manually save cookies via MCP (fallback):
# Extract cookies from Chrome DevTools and save them
mcp__notebooklm-mcp__save_auth_tokens(cookies="<cookie_header>")
#### CLI Authentication
```bash
nlm login                             # Launch browser, extract cookies (primary method)
nlm login --check                     # Validate the current session
nlm login --profile work              # Use a named profile for multiple accounts
nlm login --provider openclaw --cdp-url http://127.0.0.1:18800  # External CDP provider
nlm login switch <profile>            # Switch the default profile
nlm login profile list                # List all profiles with email addresses
nlm login profile delete <name>       # Delete a profile
nlm login profile rename <old> <new>  # Rename a profile
```
Multi-profile support: each profile gets its own isolated browser session (Chrome, Arc, Brave, Edge, Chromium, and more are supported), so you can be logged into multiple Google accounts simultaneously.
Session lifetime: ~20 minutes. Re-authenticate when commands fail with auth errors.
Switching MCP accounts: the MCP server always uses the active default profile. To switch which Google account the MCP server talks to, you MUST use the CLI: run nlm login switch <name>. Your next MCP tool call will use the new account immediately.
Note: MCP and CLI share the same authentication backend, so authenticating with one works for both.
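Because sessions expire after roughly 20 minutes, a long-running script can wrap its nlm calls so that a failure triggers one re-login and retry. A minimal sketch, assuming nothing beyond the commands documented above (the helper name `with_auth_retry` is illustrative, not part of nlm):

```shell
# Re-run a command once after re-authenticating, for expired-session failures.
# Note: this retries on ANY failure, so reserve it for commands that only
# fail when the session has lapsed.
with_auth_retry() {
  "$@" && return 0       # first attempt succeeded
  nlm login || return 1  # refresh the session (works for CLI and MCP)
  "$@"                   # second and final attempt
}
```

Usage: `with_auth_retry nlm notebook list`.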
Use tools: notebook_list, notebook_create, notebook_get, notebook_describe, notebook_query, notebook_rename, notebook_delete. All accept a notebook_id parameter. Delete requires confirm=True.
nlm notebook list                     # List all notebooks
nlm notebook list --json              # JSON output for parsing
nlm notebook list --quiet             # IDs only (for scripting)
nlm notebook create "Title"           # Create a notebook, returns its ID
nlm notebook get <id>                 # Get notebook details
nlm notebook describe <id>            # AI-generated summary + suggested topics
nlm notebook query <id> "question"    # One-shot Q&A with sources
nlm notebook rename <id> "New Title"  # Rename a notebook
nlm notebook delete <id> --confirm    # PERMANENT deletion
Use source_add with one of these source_type values:
- url - web page or YouTube URL (url param)
- text - pasted content (text + title params)
- file - local file upload (file_path param)
- drive - Google Drive doc (document_id + doc_type params)

Other tools: source_list_drive, source_describe, source_get_content, source_rename, source_sync_drive (requires confirm=True), source_delete (requires confirm=True).
# Adding sources
nlm source add <nb-id> --url "https://..."             # Web page
nlm source add <nb-id> --url "https://youtube.com/..." # YouTube video
nlm source add <nb-id> --text "content" --title "X"    # Pasted text
nlm source add <nb-id> --drive <doc-id>                # Drive doc (auto-detect type)
nlm source add <nb-id> --drive <doc-id> --type slides  # Explicit type
# Listing and viewing
nlm source list <nb-id>                     # Table of sources
nlm source list <nb-id> --drive             # Show Drive sources with freshness
nlm source list <nb-id> --drive -S          # Skip freshness checks (faster)
nlm source get <source-id>                  # Source metadata
nlm source describe <source-id>             # AI summary + keywords
nlm source content <source-id>              # Raw text content
nlm source content <source-id> -o file.txt  # Export to a file
# Drive sync (for stale sources)
nlm source stale <nb-id>                    # List outdated Drive sources
nlm source sync <nb-id> --confirm           # Sync all stale sources
nlm source sync <nb-id> --source-ids <ids> --confirm  # Sync specific sources
# Rename
nlm source rename <source-id> "New Title" --notebook <nb-id>
nlm rename source <source-id> "New Title" --notebook <nb-id>  # verb-first
# Deletion
nlm source delete <source-id> --confirm
Drive types: doc, slides, sheets, pdf
Research finds NEW sources from the web or Google Drive.
Use research_start with:
- source: web or drive
- mode: fast (~30s) or deep (~5min, web only)

Workflow: research_start → poll research_status → research_import
# Start research (--notebook-id is REQUIRED)
nlm research start "query" --notebook-id <id>                 # Fast web search (~30s)
nlm research start "query" --notebook-id <id> --mode deep     # Deep web search (~5min)
nlm research start "query" --notebook-id <id> --source drive  # Drive search
# Check progress
nlm research status <nb-id>                  # Poll until done (5min max)
nlm research status <nb-id> --max-wait 0     # Single check, no waiting
nlm research status <nb-id> --task-id <tid>  # Check a specific task
nlm research status <nb-id> --full           # Full details
# Import discovered sources
nlm research import <nb-id> <task-id>                  # Import all
nlm research import <nb-id> <task-id> --indices 0,2,5  # Import specific ones
nlm research import <nb-id> <task-id> --timeout 600    # Custom timeout (default: 300s)
Modes: fast (~30s, ~10 sources) | deep (~5min, ~40+ sources, web only)
Use studio_create with artifact_type and type-specific options. All require confirm=True.
| artifact_type | Key Options |
|---|---|
| audio | audio_format: deep_dive/brief/critique/debate, audio_length: short/default/long |
| video | video_format: explainer/brief, visual_style: auto_select/classic/whiteboard/kawaii/anime/watercolor/retro_print/heritage/paper_craft |
| report | report_format: Briefing Doc/Study Guide/Blog Post/Create Your Own, custom_prompt |
| quiz | question_count, difficulty: easy/medium/hard |
| flashcards | difficulty: easy/medium/hard |
| mind_map | title |
| slide_deck | slide_format: detailed_deck/presenter_slides, slide_length: short/default |
| infographic | orientation: landscape/portrait/square, detail_level: concise/standard/detailed, infographic_style: auto_select/sketch_note/professional/bento_grid/editorial/instructional/bricks/clay/anime/kawaii/scientific |
| data_table | description (required) |
Common options: source_ids, language (BCP-47 code), focus_prompt
Revise slides: use studio_revise to revise individual slides in an existing slide deck. Pass artifact_id (from studio_status) and slide_instructions, then poll studio_status to check when the new deck is ready.
All generation commands share these flags:
- --confirm or -y: REQUIRED to execute
- --source-ids <id1,id2>: limit to specific sources
- --language <code>: BCP-47 code (en, es, fr, de, ja)
nlm audio create <id> --confirm
nlm audio create <id> --format deep_dive --length default --confirm
nlm audio create <id> --format brief --focus "key topic" --confirm
nlm report create <id> --confirm
nlm report create <id> --format "Study Guide" --confirm
nlm report create <id> --format "Create Your Own" --prompt "Custom..." --confirm
nlm quiz create <id> --confirm
nlm quiz create <id> --count 5 --difficulty 3 --confirm
nlm quiz create <id> --count 10 --difficulty 3 --focus "Focus on key concepts" --confirm
nlm flashcards create <id> --confirm
nlm flashcards create <id> --difficulty hard --confirm
nlm flashcards create <id> --difficulty medium --focus "Focus on definitions" --confirm
nlm mindmap create <id> --confirm
nlm mindmap create <id> --title "Topic Overview" --confirm
nlm mindmap list <id>   # List existing mind maps
nlm slides create <id> --confirm
nlm slides create <id> --format presenter --length short --confirm
nlm slides revise <artifact-id> --slide '1 Make the title bigger' --confirm
nlm infographic create <id> --confirm
nlm infographic create <id> --orientation portrait --detail detailed --style professional --confirm
nlm video create <id> --confirm
nlm video create <id> --format brief --style whiteboard --confirm
nlm data-table create <id> "Extract all dates and events" --confirm
Use studio_status to check progress (or rename with action="rename"). Use download_artifact with artifact_type and output_path. Use export_artifact with export_type: docs/sheets. Delete with studio_delete (requires confirm=True).
# Check status
nlm studio status <nb-id>         # List all artifacts
nlm studio status <nb-id> --full  # Show full details (including custom prompts)
nlm studio status <nb-id> --json  # JSON output
# Download artifacts
nlm download audio <nb-id> --output podcast.mp3
nlm download video <nb-id> --output video.mp4
nlm download report <nb-id> --output report.md
nlm download slide-deck <nb-id> --output slides.pdf                 # PDF (default)
nlm download slide-deck <nb-id> --output slides.pptx --format pptx  # PPTX
nlm download quiz <nb-id> --output quiz.json --format json
# Export to Google Docs/Sheets
nlm export sheets <nb-id> <artifact-id> --title "My Data Table"
nlm export docs <nb-id> <artifact-id> --title "My Report"
# Delete an artifact
nlm studio delete <nb-id> <artifact-id> --confirm
Status values: completed (✓), in_progress (●), failed (✗)
Prompt extraction: the studio_status tool returns a custom_instructions field for each artifact. This contains the original focus prompt or custom instructions used to generate that artifact (e.g., the prompt for a "Create Your Own" report, or the focus topic for an Audio Overview). This is useful for retrieving the exact prompt that generated a successful artifact.
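In a script, those prompts can be pulled out of `nlm studio status <nb-id> --json` with jq. The JSON shape below is an assumption for illustration only; inspect the real `--json` output before relying on any field names:

```shell
# Illustrative sample standing in for `nlm studio status <nb-id> --json`;
# the "artifacts" array and its field names are assumed, not documented.
sample='{"artifacts":[{"id":"a1","custom_instructions":"Focus on key dates"}]}'
echo "$sample" | jq -r '.artifacts[].custom_instructions'
# → Focus on key dates
```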
MCP tool: source_rename(notebook_id, source_id, new_title)
CLI:
nlm source rename <source-id> "New Title" --notebook <notebook-id>
nlm rename source <source-id> "New Title" --notebook <notebook-id>  # verb-first
Use studio_status with action="rename", artifact_id, and new_title.
nlm studio rename <artifact-id> "New Title"
nlm rename studio <artifact-id> "New Title"  # verb-first alternative
Use server_info to get the version and check for updates:
mcp__notebooklm-mcp__server_info()
# Returns: version, latest_version, update_available, update_command
nlm --version  # Shows version and update availability
Use chat_configure with goal: default/learning_guide/custom. Use note with action: create/list/update/delete. Delete requires confirm=True.
⚠️ AI TOOLS: DO NOT USE nlm chat start - it launches an interactive REPL that cannot be controlled programmatically. Use nlm notebook query for one-shot Q&A instead.
For human users at a terminal:
nlm chat start <nb-id>  # Launch interactive REPL
REPL commands:
- /sources - list available sources
- /clear - reset conversation context
- /help - show commands
- /exit - exit the REPL

Configure chat behavior (works for both the REPL and query):
nlm chat configure <id> --goal default
nlm chat configure <id> --goal learning_guide
nlm chat configure <id> --goal custom --prompt "Act as a tutor..."
nlm chat configure <id> --response-length longer  # longer, default, shorter
Notes management:
nlm note create <nb-id> "Content" --title "Title"
nlm note list <nb-id>
nlm note update <nb-id> <note-id> --content "New content"
nlm note delete <nb-id> <note-id> --confirm
Use notebook_share_status to check, notebook_share_public to enable/disable the public link, and notebook_share_invite with email and role: viewer/editor.
# Check sharing status
nlm share status <nb-id>
# Enable/disable the public link
nlm share public <nb-id>        # Enable
nlm share public <nb-id> --off  # Disable
# Invite collaborators
nlm share invite <nb-id> user@example.com
nlm share invite <nb-id> user@example.com --role editor
Simplify long UUIDs:
nlm alias set myproject abc123-def456...  # Create an alias (auto-detects type)
nlm alias get myproject                   # Resolve to the UUID
nlm alias list                            # List all aliases
nlm alias delete myproject                # Remove an alias
# Use aliases anywhere
nlm notebook get myproject
nlm source list myproject
nlm audio create myproject --confirm
CLI-only commands for managing settings:
nlm config show                    # Show current config
nlm config get <key>               # Get a specific setting
nlm config set <key> <value>       # Update a setting
nlm config set output.format json  # Change the default output
# For switching profiles, prefer the simpler command:
nlm login switch work              # Switch the default profile
Available settings:
| Key | Default | Description |
|---|---|---|
| output.format | table | Default output format (table, json) |
| output.color | true | Enable colored output |
| output.short_ids | true | Show shortened IDs |
| auth.browser | auto | Preferred browser for login (auto, chrome, arc, brave, edge, chromium, vivaldi, opera) |
| auth.default_profile | default | Profile used when --profile is not given |
Manage the NotebookLM skill installation for various AI assistants:
nlm skill list              # Show installation status
nlm skill update            # Update all outdated skills
nlm skill update <tool>     # Update a specific skill (e.g., claude-code)
nlm skill install <tool>    # Install a skill
nlm skill uninstall <tool>  # Uninstall a skill
Verb-first aliases: nlm update skill, nlm list skills, nlm install skill
Most list commands support multiple output formats:
| Flag | Description |
|---|---|
| (none) | Rich table (human-readable) |
| --json | JSON output (for parsing) |
| --quiet | IDs only (for piping) |
| --title | "ID: Title" format |
| --url | "ID: URL" format (sources only) |
| --full | All columns/details |
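Since --quiet prints IDs one per line, it pipes cleanly into shell loops. A small sketch of the pattern (the helper name `for_each_notebook` is illustrative, not part of nlm):

```shell
# Run a command once per notebook, appending each notebook ID as the last
# argument. Relies on `nlm notebook list --quiet` printing one ID per line.
for_each_notebook() {
  nlm notebook list --quiet | while read -r id; do
    "$@" "$id"
  done
}
```

Usage: `for_each_notebook nlm studio status` checks generation progress across every notebook.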
Perform the same action across multiple notebooks at once.
Use batch with an action parameter. Select notebooks by notebook_names, tags, or all=True.
batch(action="query", query="What are the key findings?", notebook_names="AI Research, Dev Tools")
batch(action="add_source", source_url="https://example.com", tags="ai,research")
batch(action="create", titles="Project A, Project B, Project C")
batch(action="delete", notebook_names="Old Project", confirm=True)
batch(action="studio", artifact_type="audio", tags="research", confirm=True)
nlm batch query "What are the key takeaways?" --notebooks "id1,id2"
nlm batch query "Summarize" --tags "ai,research"  # Query by tag
nlm batch query "Summarize" --all                 # Query ALL notebooks
nlm batch add-source --url "https://..." --notebooks "id1,id2"
nlm batch create "Project A, Project B, Project C"  # Create multiple
nlm batch delete --notebooks "id1,id2" --confirm    # Delete multiple
nlm batch studio --type audio --tags "research" --confirm  # Generate across notebooks
Query multiple notebooks and get an aggregated answer with per-notebook citations.
cross_notebook_query(query="Compare approaches", notebook_names="Notebook A, Notebook B")
cross_notebook_query(query="Summarize", tags="ai,research")
cross_notebook_query(query="Everything", all=True)
nlm cross query "What features are discussed?" --notebooks "id1,id2"
nlm cross query "Compare approaches" --tags "ai,research"
nlm cross query "Summarize everything" --all
Define and execute multi-step notebook workflows. Three built-in pipelines, plus support for custom YAML pipelines.
pipeline(action="list")  # List available pipelines
pipeline(action="run", notebook_id="...", pipeline_name="ingest-and-podcast", input_url="https://...")
nlm pipeline list  # List available pipelines
nlm pipeline run <notebook> ingest-and-podcast --url "https://..."
nlm pipeline run <notebook> research-and-report --url "https://..."
nlm pipeline run <notebook> multi-format  # Audio + report + flashcards
Built-in pipelines: ingest-and-podcast, research-and-report, multi-format
Create custom pipelines: add YAML files to ~/.notebooklm-mcp-cli/pipelines/
Tag notebooks for organization, and use tags to target batch operations.
tag(action="add", notebook_id="...", tags="ai,research,llm")
tag(action="remove", notebook_id="...", tags="ai")
tag(action="list")                         # List all tagged notebooks
tag(action="select", query="ai research")  # Find notebooks by tag match
nlm tag add <notebook> --tags "ai,research,llm"           # Add tags
nlm tag add <notebook> --tags "ai" --title "My Notebook"  # With a display title
nlm tag remove <notebook> --tags "ai"                     # Remove tags
nlm tag list                                              # List all tagged notebooks
nlm tag select "ai research"                              # Find notebooks by tag match
nlm notebook create "AI Research 2026"  # Capture the ID
nlm alias set ai <notebook-id>
nlm research start "agentic AI trends" --notebook-id ai --mode deep
nlm research status ai --max-wait 300   # Wait up to 5 minutes
nlm research import ai <task-id>        # Import all sources
nlm audio create ai --format deep_dive --confirm
nlm studio status ai                    # Check generation progress
nlm source add <id> --url "https://example1.com"
nlm source add <id> --url "https://example2.com"
nlm source add <id> --text "My notes..." --title "Notes"
nlm source list <id>
nlm report create <id> --format "Study Guide" --confirm
nlm quiz create <id> --count 10 --difficulty 3 --focus "Exam prep" --confirm
nlm flashcards create <id> --difficulty medium --focus "Core terms" --confirm
nlm source add <id> --drive 1KQH3eW0hMBp7WK... --type slides
# ... time passes, the document is edited ...
nlm source stale <id>           # Check freshness
nlm source sync <id> --confirm  # Sync if stale
# Tag notebooks for organization
nlm tag add <id1> --tags "ai,research"
nlm tag add <id2> --tags "ai,product"
# Query across tagged notebooks
nlm cross query "What are the main conclusions?" --tags "ai"
# Batch-generate podcasts for all tagged notebooks
nlm batch studio --type audio --tags "ai" --confirm
# Run a pipeline on a single notebook
nlm pipeline run <id> ingest-and-podcast --url "https://example.com"
| Error | Cause | Solution |
|---|---|---|
| "Cookies have expired" | Session timeout | nlm login |
| "authentication may have expired" | Session timeout | nlm login |
| "Notebook not found" | Invalid ID | nlm notebook list |
| "Source not found" | Invalid ID | nlm source list <nb-id> |
| "Rate limit exceeded" | Too many calls | Wait 30s, retry |
| "Research already in progress" | Pending research | Use --force or import first |
| "Import timed out" | Too many sources | Use --timeout 600 for larger notebooks |
| "Google API error code 3" | Transient deep-research error | Retry after a few minutes, or use --mode fast |
| Browser does not launch | Port conflict | Close the browser, retry |
Wait between operations to avoid rate limits. For full details on every command, run nlm --ai.
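A rate-limit failure from the table above usually clears after about 30 seconds, so a script can simply wait and retry once. A minimal sketch (the helper name `retry_after_wait` is illustrative, not part of nlm):

```shell
# Retry a failed command once after a 30-second pause, e.g. for
# "Rate limit exceeded". Retries on any failure, so keep its use targeted.
retry_after_wait() {
  "$@" && return 0  # first attempt succeeded, no wait needed
  echo "Command failed; waiting 30s before retrying..." >&2
  sleep 30
  "$@"
}
```

Usage: `retry_after_wait nlm audio create <nb-id> --confirm`.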
This skill provides comprehensive guidance for using NotebookLM via both the nlm CLI and MCP tools.
ALWAYS check which tools are available before proceeding:
mcp__notebooklm-mcp__* or mcp_notebooklm_*nlm CLI commands via BashDecision Logic:
has_mcp_tools = check_available_tools() # Look for mcp__notebooklm-mcp__* or mcp_notebooklm_*
has_cli = check_bash_available() # Can run nlm commands
if has_mcp_tools and has_cli:
# ASK USER: "I can use either MCP tools or the nlm CLI. Which do you prefer?"
user_preference = ask_user()
else if has_mcp_tools:
# Use MCP tools directly
mcp__notebooklm-mcp__notebook_list()
else:
# Use CLI via Bash
bash("nlm notebook list")
This skill documents BOTH approaches. Choose the appropriate one based on tool availability and user preference.
Runnlm --ai to get comprehensive AI-optimized documentation - this provides a complete view of all CLI capabilities.
nlm --help # List all commands
nlm <command> --help # Help for specific command
nlm --ai # Full AI-optimized documentation (RECOMMENDED)
nlm --version # Check installed version
nlm login before any operationsnlm login if commands start failing--confirm is REQUIRED: All generation and delete commands need --confirm or -y (CLI) or confirm=True (MCP)--notebook-id: The flag is mandatory, not positionalUse this to determine the right sequence of commands:
User wants to...
│
├─► Work with NotebookLM for the first time
│ └─► nlm login → nlm notebook create "Title"
│
├─► Add content to a notebook
│ ├─► From a URL/webpage → nlm source add <nb-id> --url "https://..."
│ ├─► From YouTube → nlm source add <nb-id> --url "https://youtube.com/..."
│ ├─► From pasted text → nlm source add <nb-id> --text "content" --title "Title"
│ ├─► From Google Drive → nlm source add <nb-id> --drive <doc-id> --type doc
│ └─► Discover new sources → nlm research start "query" --notebook-id <nb-id>
│
├─► Generate content from sources
│ ├─► Podcast/Audio → nlm audio create <nb-id> --confirm
│ ├─► Written summary → nlm report create <nb-id> --confirm
│ ├─► Study materials → nlm quiz/flashcards create <nb-id> --confirm
│ ├─► Visual content → nlm mindmap/slides/infographic create <nb-id> --confirm
│ ├─► Video → nlm video create <nb-id> --confirm
│ └─► Extract data → nlm data-table create <nb-id> "description" --confirm
│
├─► Ask questions about sources
│ └─► nlm notebook query <nb-id> "question"
│ (Use --conversation-id for follow-ups)
│ ⚠️ Do NOT use `nlm chat start` - it's a REPL for humans only
│
├─► Check generation status
│ └─► nlm studio status <nb-id>
│
└─► Manage/cleanup
├─► List notebooks → nlm notebook list
├─► List sources → nlm source list <nb-id>
├─► Delete source → nlm source delete <source-id> --confirm
└─► Delete notebook → nlm notebook delete <nb-id> --confirm
If using MCP tools and encountering authentication errors:
# Run the CLI authentication (works for both CLI and MCP)
nlm login
# Then reload tokens in MCP
mcp__notebooklm-mcp__refresh_auth()
Or manually save cookies via MCP (fallback):
# Extract cookies from Chrome DevTools and save
mcp__notebooklm-mcp__save_auth_tokens(cookies="<cookie_header>")
#### CLI Authentication
```bash
nlm login # Launch browser, extract cookies (primary method)
nlm login --check # Validate current session
nlm login --profile work # Use named profile for multiple accounts
nlm login --provider openclaw --cdp-url http://127.0.0.1:18800 # External CDP provider
nlm login switch <profile> # Switch the default profile
nlm login profile list # List all profiles with email addresses
nlm login profile delete <name> # Delete a profile
nlm login profile rename <old> <new> # Rename a profile
Multi-Profile Support : Each profile gets its own isolated browser session (supports Chrome, Arc, Brave, Edge, Chromium, and more), so you can be logged into multiple Google accounts simultaneously.
Session lifetime : ~20 minutes. Re-authenticate when commands fail with auth errors.
Switching MCP Accounts : The MCP server always uses the active default profile. If you need to switch which Google account the MCP server is communicating with, you MUST use the CLI: run nlm login switch <name>. Your next MCP tool call will instantly use the new account.
Note : Both MCP and CLI share the same authentication backend, so authenticating with one works for both.
Use tools: notebook_list, notebook_create, notebook_get, notebook_describe, notebook_query, notebook_rename, notebook_delete. All accept notebook_id parameter. Delete requires confirm=True.
nlm notebook list # List all notebooks
nlm notebook list --json # JSON output for parsing
nlm notebook list --quiet # IDs only (for scripting)
nlm notebook create "Title" # Create notebook, returns ID
nlm notebook get <id> # Get notebook details
nlm notebook describe <id> # AI-generated summary + suggested topics
nlm notebook query <id> "question" # One-shot Q&A with sources
nlm notebook rename <id> "New Title" # Rename notebook
nlm notebook delete <id> --confirm # PERMANENT deletion
Use source_add with these source_type values:
url - Web page or YouTube URL (url param)text - Pasted content (text + title params)file - Local file upload (file_path param)drive - Google Drive doc (document_id + doc_type params)Other tools: source_list_drive, source_describe, source_get_content, source_rename, source_sync_drive (requires confirm=True), source_delete (requires confirm=True).
# Adding sources
nlm source add <nb-id> --url "https://..." # Web page
nlm source add <nb-id> --url "https://youtube.com/..." # YouTube video
nlm source add <nb-id> --text "content" --title "X" # Pasted text
nlm source add <nb-id> --drive <doc-id> # Drive doc (auto-detect type)
nlm source add <nb-id> --drive <doc-id> --type slides # Explicit type
# Listing and viewing
nlm source list <nb-id> # Table of sources
nlm source list <nb-id> --drive # Show Drive sources with freshness
nlm source list <nb-id> --drive -S # Skip freshness checks (faster)
nlm source get <source-id> # Source metadata
nlm source describe <source-id> # AI summary + keywords
nlm source content <source-id> # Raw text content
nlm source content <source-id> -o file.txt # Export to file
# Drive sync (for stale sources)
nlm source stale <nb-id> # List outdated Drive sources
nlm source sync <nb-id> --confirm # Sync all stale sources
nlm source sync <nb-id> --source-ids <ids> --confirm # Sync specific
# Rename
nlm source rename <source-id> "New Title" --notebook <nb-id>
nlm rename source <source-id> "New Title" --notebook <nb-id> # verb-first
# Deletion
nlm source delete <source-id> --confirm
Drive types : doc, slides, sheets, pdf
Research finds NEW sources from the web or Google Drive.
Use research_start with:
source: web or drivemode: fast (~30s) or deep (~5min, web only)Workflow: research_start → poll research_status → research_import
# Start research (--notebook-id is REQUIRED)
nlm research start "query" --notebook-id <id> # Fast web (~30s)
nlm research start "query" --notebook-id <id> --mode deep # Deep web (~5min)
nlm research start "query" --notebook-id <id> --source drive # Drive search
# Check progress
nlm research status <nb-id> # Poll until done (5min max)
nlm research status <nb-id> --max-wait 0 # Single check, no waiting
nlm research status <nb-id> --task-id <tid> # Check specific task
nlm research status <nb-id> --full # Full details
# Import discovered sources
nlm research import <nb-id> <task-id> # Import all
nlm research import <nb-id> <task-id> --indices 0,2,5 # Import specific
nlm research import <nb-id> <task-id> --timeout 600 # Custom timeout (default: 300s)
Modes : fast (~30s, ~10 sources) | deep (~5min, ~40+ sources, web only)
Use studio_create with artifact_type and type-specific options. All require confirm=True.
| artifact_type | Key Options |
|---|---|
audio | audio_format: deep_dive/brief/critique/debate, audio_length: short/default/long |
video | video_format: explainer/brief, visual_style: auto_select/classic/whiteboard/kawaii/anime/watercolor/retro_print/heritage/paper_craft |
report | report_format: Briefing Doc/Study Guide/Blog Post/Create Your Own, |
Common options : source_ids, language (BCP-47 code), focus_prompt
Revise Slides: Use studio_revise to revise individual slides in an existing slide deck.
artifact_id (from studio_status) and slide_instructionsstudio_status after calling to check when the new deck is readyAll generation commands share these flags:
--confirm or -y: REQUIRED to execute
--source-ids <id1,id2>: Limit to specific sources
--language <code>: BCP-47 code (en, es, fr, de, ja)
nlm audio create <id> --confirm nlm audio create <id> --format deep_dive --length default --confirm nlm audio create <id> --format brief --focus "key topic" --confirm
nlm report create <id> --confirm nlm report create <id> --format "Study Guide" --confirm nlm report create <id> --format "Create Your Own" --prompt "Custom..." --confirm
nlm quiz create <id> --confirm nlm quiz create <id> --count 5 --difficulty 3 --confirm nlm quiz create <id> --count 10 --difficulty 3 --focus "Focus on key concepts" --confirm
Use studio_status to check progress (or rename with action="rename"). Use download_artifact with artifact_type and output_path. Use export_artifact with export_type: docs/sheets. Delete with studio_delete (requires confirm=True).
# Check status
nlm studio status <nb-id> # List all artifacts
nlm studio status <nb-id> --full # Show full details (including custom prompts)
nlm studio status <nb-id> --json # JSON output
# Download artifacts
nlm download audio <nb-id> --output podcast.mp3
nlm download video <nb-id> --output video.mp4
nlm download report <nb-id> --output report.md
nlm download slide-deck <nb-id> --output slides.pdf # PDF (default)
nlm download slide-deck <nb-id> --output slides.pptx --format pptx # PPTX
nlm download quiz <nb-id> --output quiz.json --format json
# Export to Google Docs/Sheets
nlm export sheets <nb-id> <artifact-id> --title "My Data Table"
nlm export docs <nb-id> <artifact-id> --title "My Report"
# Delete artifact
nlm studio delete <nb-id> <artifact-id> --confirm
Status values : completed (✓), in_progress (●), failed (✗)
Prompt Extraction : The studio_status tool returns a custom_instructions field for each artifact. This contains the original focus prompt or custom instructions used to generate that artifact (e.g., the prompt for a "Create Your Own" report, or the focus topic for an Audio Overview). This is useful for retrieving the exact prompt that generated a successful artifact.
MCP Tool: source_rename(notebook_id, source_id, new_title)
CLI:
nlm source rename <source-id> "New Title" --notebook <notebook-id>
nlm rename source <source-id> "New Title" --notebook <notebook-id> # verb-first
Use studio_status with action="rename", artifact_id, and new_title.
nlm studio rename <artifact-id> "New Title"
nlm rename studio <artifact-id> "New Title" # verb-first alternative
Use server_info to get version and check for updates:
mcp__notebooklm-mcp__server_info()
# Returns: version, latest_version, update_available, update_command
nlm --version # Shows version and update availability
Use chat_configure with goal: default/learning_guide/custom. Use note with action: create/list/update/delete. Delete requires confirm=True.
⚠️ AI TOOLS: DO NOT USE
nlm chat start- It launches an interactive REPL that cannot be controlled programmatically. Usenlm notebook queryfor one-shot Q&A instead.
For human users at a terminal:
nlm chat start <nb-id> # Launch interactive REPL
REPL Commands :
/sources - List available sources/clear - Reset conversation context/help - Show commands/exit - Exit REPLConfigure chat behavior (works for both REPL and query):
nlm chat configure <id> --goal default
nlm chat configure <id> --goal learning_guide
nlm chat configure <id> --goal custom --prompt "Act as a tutor..."
nlm chat configure <id> --response-length longer # longer, default, shorter
Notes management :
nlm note create <nb-id> "Content" --title "Title"
nlm note list <nb-id>
nlm note update <nb-id> <note-id> --content "New content"
nlm note delete <nb-id> <note-id> --confirm
Use notebook_share_status to check, notebook_share_public to enable/disable public link, notebook_share_invite with email and role: viewer/editor.
# Check sharing status
nlm share status <nb-id>
# Enable/disable public link
nlm share public <nb-id> # Enable
nlm share public <nb-id> --off # Disable
# Invite collaborator
nlm share invite <nb-id> user@example.com
nlm share invite <nb-id> user@example.com --role editor
Simplify long UUIDs:
nlm alias set myproject abc123-def456... # Create alias (auto-detects type)
nlm alias get myproject # Resolve to UUID
nlm alias list # List all aliases
nlm alias delete myproject # Remove alias
# Use aliases anywhere
nlm notebook get myproject
nlm source list myproject
nlm audio create myproject --confirm
CLI-only commands for managing settings:
nlm config show # Show current config
nlm config get <key> # Get specific setting
nlm config set <key> <value> # Update setting
nlm config set output.format json # Change default output
# For switching profiles, prefer the simpler command:
nlm login switch work # Switch default profile
Available Settings:
| Key | Default | Description |
|---|---|---|
output.format | table | Default output format (table, json) |
output.color | true | Enable colored output |
output.short_ids | true | Show shortened IDs |
auth.browser |
Manage the NotebookLM skill installation for various AI assistants:
nlm skill list # Show installation status
nlm skill update # Update all outdated skills
nlm skill update <tool> # Update specific skill (e.g., claude-code)
nlm skill install <tool> # Install skill
nlm skill uninstall <tool> # Uninstall skill
Verb-first aliases : nlm update skill, nlm list skills, nlm install skill
Most list commands support multiple formats:
| Flag | Description |
|---|---|
| (none) | Rich table (human-readable) |
--json | JSON output (for parsing) |
--quiet | IDs only (for piping) |
--title | "ID: Title" format |
--url | "ID: URL" format (sources only) |
--full | All columns/details |
Perform the same action across multiple notebooks at once.
Use batch with action parameter. Select notebooks by notebook_names, tags, or all=True.
batch(action="query", query="What are the key findings?", notebook_names="AI Research, Dev Tools")
batch(action="add_source", source_url="https://example.com", tags="ai,research")
batch(action="create", titles="Project A, Project B, Project C")
batch(action="delete", notebook_names="Old Project", confirm=True)
batch(action="studio", artifact_type="audio", tags="research", confirm=True)
nlm batch query "What are the key takeaways?" --notebooks "id1,id2"
nlm batch query "Summarize" --tags "ai,research" # Query by tag
nlm batch query "Summarize" --all # Query ALL notebooks
nlm batch add-source --url "https://..." --notebooks "id1,id2"
nlm batch create "Project A, Project B, Project C" # Create multiple
nlm batch delete --notebooks "id1,id2" --confirm # Delete multiple
nlm batch studio --type audio --tags "research" --confirm # Generate across notebooks
Query multiple notebooks and get aggregated answers with per-notebook citations.
cross_notebook_query(query="Compare approaches", notebook_names="Notebook A, Notebook B")
cross_notebook_query(query="Summarize", tags="ai,research")
cross_notebook_query(query="Everything", all=True)
nlm cross query "What features are discussed?" --notebooks "id1,id2"
nlm cross query "Compare approaches" --tags "ai,research"
nlm cross query "Summarize everything" --all
Define and execute multi-step notebook workflows. Three built-in pipelines plus support for custom YAML pipelines.
pipeline(action="list") # List available pipelines
pipeline(action="run", notebook_id="...", pipeline_name="ingest-and-podcast", input_url="https://...")
nlm pipeline list # List available pipelines
nlm pipeline run <notebook> ingest-and-podcast --url "https://..."
nlm pipeline run <notebook> research-and-report --url "https://..."
nlm pipeline run <notebook> multi-format # Audio + report + flashcards
Built-in pipelines: ingest-and-podcast, research-and-report, multi-format
Create custom pipelines: add YAML files to ~/.notebooklm-mcp-cli/pipelines/
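A custom pipeline file might look like the sketch below. The schema (field names such as name, description, steps) is an assumption for illustration, not a documented format; verify it against the project's pipeline docs before relying on it.

```yaml
# ~/.notebooklm-mcp-cli/pipelines/brief-and-quiz.yaml
# NOTE: the field names here are illustrative assumptions, not a documented schema.
name: brief-and-quiz
description: Ingest a URL, then generate a report and a quiz
steps:
  - type: add_source   # consumes the --url passed to `nlm pipeline run`
  - type: report
  - type: quiz
```

Once the file is in place, it should show up alongside the built-in pipelines in nlm pipeline list.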
Tag notebooks for organization and use tags to target batch operations.
tag(action="add", notebook_id="...", tags="ai,research,llm")
tag(action="remove", notebook_id="...", tags="ai")
tag(action="list") # List all tagged notebooks
tag(action="select", query="ai research") # Find notebooks by tag match
nlm tag add <notebook> --tags "ai,research,llm" # Add tags
nlm tag add <notebook> --tags "ai" --title "My Notebook" # With display title
nlm tag remove <notebook> --tags "ai" # Remove tags
nlm tag list # List all tagged notebooks
nlm tag select "ai research" # Find notebooks by tag match
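The tag-select matching exercised by nlm tag select "ai research" can be approximated as: a notebook matches when any query word appears among its tags. This is a simplified sketch of that idea, not the tool's actual algorithm:

```python
def select_by_tags(tagged, query):
    """Return notebook ids whose tags overlap the query words.

    `tagged` maps notebook id -> set of tags. The matching rule here
    (any query word equals any tag, case-insensitively) is a simplifying
    assumption about how `nlm tag select` behaves.
    """
    words = set(query.lower().split())
    return [nb for nb, tags in tagged.items()
            if words & {t.lower() for t in tags}]

notebooks = {
    "nb-1": {"ai", "research"},
    "nb-2": {"product"},
}
# select_by_tags(notebooks, "ai research") -> ["nb-1"]
```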
nlm notebook create "AI Research 2026" # Capture ID
nlm alias set ai <notebook-id>
nlm research start "agentic AI trends" --notebook-id ai --mode deep
nlm research status ai --max-wait 300 # Wait up to 5 min
nlm research import ai <task-id> # Import all sources
nlm audio create ai --format deep_dive --confirm
nlm studio status ai # Check generation progress
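The --max-wait 300 step above is a poll-until-done loop. The same pattern can be sketched with the status check injected as a callable, standing in for nlm research status (injecting clock and sleep keeps the sketch testable):

```python
import time

def wait_until_done(check, max_wait=300, interval=10,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll `check()` until it returns truthy or `max_wait` seconds elapse.

    `check` stands in for something like `nlm research status <nb>`;
    returns True if the work finished before the deadline, else False.
    """
    deadline = clock() + max_wait
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

In practice check would shell out to the CLI and inspect its output; the loop structure is the point here.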
nlm source add <id> --url "https://example1.com"
nlm source add <id> --url "https://example2.com"
nlm source add <id> --text "My notes..." --title "Notes"
nlm source list <id>
nlm report create <id> --format "Study Guide" --confirm
nlm quiz create <id> --count 10 --difficulty 3 --focus "Exam prep" --confirm
nlm flashcards create <id> --difficulty medium --focus "Core terms" --confirm
nlm source add <id> --drive 1KQH3eW0hMBp7WK... --type slides
# ... time passes, document is edited ...
nlm source stale <id> # Check freshness
nlm source sync <id> --confirm # Sync if stale
# Tag notebooks for organization
nlm tag add <id1> --tags "ai,research"
nlm tag add <id2> --tags "ai,product"
# Query across tagged notebooks
nlm cross query "What are the main conclusions?" --tags "ai"
# Batch generate podcasts for all tagged notebooks
nlm batch studio --type audio --tags "ai" --confirm
# Run a pipeline on a single notebook
nlm pipeline run <id> ingest-and-podcast --url "https://example.com"
| Error | Cause | Solution |
|---|---|---|
| "Cookies have expired" | Session timeout | nlm login |
| "authentication may have expired" | Session timeout | nlm login |
| "Notebook not found" | Invalid ID | nlm notebook list |
| "Source not found" | Invalid ID | nlm source list <nb-id> |
| "Rate limit exceeded" | Too many calls | Wait 30s, retry |
| "Research already in progress" | Pending research | Use --force or import first |
Wait between operations to avoid rate limits.
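That pacing advice can be captured in a small throttle helper. This is a sketch only; the 2-second gap in the usage comment is a guess, not a documented limit:

```python
import time

def throttled(min_interval, clock=time.monotonic, sleep=time.sleep):
    """Return a callable that enforces a minimum gap between invocations."""
    last = [None]  # time of the previous call, None before the first
    def wait():
        now = clock()
        if last[0] is not None and now - last[0] < min_interval:
            sleep(min_interval - (now - last[0]))
        last[0] = clock()
    return wait

# pace = throttled(2.0)  # hypothetical 2s gap between nlm calls
# for url in urls:       # `urls`, `run`, `nb` are illustrative names
#     pace()
#     run(["nlm", "source", "add", nb, "--url", url])
```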
For detailed information, run nlm --ai for the complete AI-optimized documentation.
- Use nlm alias set <name> <uuid> to shorten long UUIDs; check nlm alias list before creating a new alias to avoid conflicts with existing names.
- Avoid nlm chat start: it opens an interactive REPL that AI tools cannot control. Use nlm notebook query for one-shot Q&A instead.
- Prefer --quiet to capture IDs for piping. Only use --json when you need to parse specific fields programmatically.
- Use --help when unsure: run nlm <command> --help to see available options and flags for any command.

| Artifact type | Options |
|---|---|
| … | custom_prompt |
| quiz | question_count, difficulty: easy/medium/hard |
| flashcards | difficulty: easy/medium/hard |
| mind_map | title |
| slide_deck | slide_format: detailed_deck/presenter_slides, slide_length: short/default |
| infographic | orientation: landscape/portrait/square, detail_level: concise/standard/detailed, infographic_style: auto_select/sketch_note/professional/bento_grid/editorial/instructional/bricks/clay/anime/kawaii/scientific |
| data_table | description (REQUIRED) |
nlm flashcards create <id> --confirm
nlm flashcards create <id> --difficulty hard --confirm
nlm flashcards create <id> --difficulty medium --focus "Focus on definitions" --confirm
nlm mindmap create <id> --confirm
nlm mindmap create <id> --title "Topic Overview" --confirm
nlm mindmap list <id> # List existing mind maps
nlm slides create <id> --confirm
nlm slides create <id> --format presenter --length short --confirm
nlm slides revise <artifact-id> --slide '1 Make the title larger' --confirm
nlm infographic create <id> --confirm
nlm infographic create <id> --orientation portrait --detail detailed --style professional --confirm
nlm video create <id> --confirm
nlm video create <id> --format brief --style whiteboard --confirm
nlm data-table create <id> "Extract all dates and events" --confirm
| Setting | Default | Description |
|---|---|---|
| … | auto | Preferred browser for login (auto, chrome, arc, brave, edge, chromium, vivaldi, opera) |
| auth.default_profile | default | Profile to use when --profile not specified |
| Error | Cause | Solution |
|---|---|---|
| "Import timed out" | Too many sources | Use --timeout 600 for larger notebooks |
| "Google API error code 3" | Transient deep research error | Retry in a few minutes, or use --mode fast |
| Browser doesn't launch | Port conflict | Close browser, retry |