doc-coauthoring by anthropics/skills
npx skills add https://github.com/anthropics/skills --skill doc-coauthoring
This skill provides a structured workflow for guiding users through collaborative document creation. Act as an active guide, walking users through three stages: Context Gathering, Refinement & Structure, and Reader Testing.
Trigger conditions:
Initial offer: Offer the user a structured workflow for co-authoring the document. Explain the three stages:
Explain that this approach helps ensure the doc works well when others read it (including when they paste it into Claude). Ask if they want to try this workflow or prefer to work freeform.
If user declines, work freeform. If user accepts, proceed to Stage 1.
Goal: Close the gap between what the user knows and what Claude knows, enabling smart guidance later.
Start by asking the user for meta-context about the document:
Inform them they can answer in shorthand or dump information however works best for them.
If user provides a template or mentions a doc type:
If user mentions editing an existing shared document:
Once initial questions are answered, encourage the user to dump all the context they have. Request information such as:
Advise them not to worry about organizing it - just get it all out. Offer multiple ways to provide context:
If integrations are available (e.g., Slack, Teams, Google Drive, SharePoint, or other MCP servers), mention that these can be used to pull in context directly.
If no integrations are detected and in Claude.ai or Claude app: Suggest they can enable connectors in their Claude settings to allow pulling context from messaging apps and document storage directly.
Inform them clarifying questions will be asked once they've done their initial dump.
During context gathering:
If user mentions team channels or shared documents:
If user mentions entities/projects that are unknown:
As the user provides context, track what's being learned and what's still unclear.
Asking clarifying questions:
When user signals they've done their initial dump (or after substantial context provided), ask clarifying questions to ensure understanding:
Generate 5-10 numbered questions based on gaps in the context.
Inform them they can use shorthand to answer (e.g., "1: yes, 2: see #channel, 3: no because backwards compat"), link to more docs, point to channels to read, or just keep info-dumping. Whatever's most efficient for them.
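Shorthand replies like the example above are easy to split back into per-question answers. A minimal sketch in Python (the function name and the `{number: answer}` shape are illustrative, not part of the skill):

```python
import re

def parse_shorthand(reply: str) -> dict[int, str]:
    """Split a reply like '1: yes, 2: see #channel' into {question_number: answer}."""
    answers = {}
    # Split only at commas that are followed by a new "N:" marker, so
    # commas inside an answer ("no, because...") are left alone.
    for part in re.split(r",\s*(?=\d+:)", reply):
        num, _, text = part.partition(":")
        answers[int(num)] = text.strip()
    return answers
```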
Exit condition: Sufficient context has been gathered when questions show understanding - when edge cases and trade-offs can be asked about without needing basics explained.
Transition: Ask if there's any more context they want to provide at this stage, or if it's time to move on to drafting the document.
If user wants to add more, let them. When ready, proceed to Stage 2.
Goal: Build the document section by section through brainstorming, curation, and iterative refinement.
Instructions to user: Explain that the document will be built section by section. For each section:
Start with whichever section has the most unknowns (usually the core decision/proposal), then work through the rest.
Section ordering:
If the document structure is clear: Ask which section they'd like to start with.
Suggest starting with whichever section has the most unknowns. For decision docs, that's usually the core proposal. For specs, it's typically the technical approach. Summary sections are best left for last.
If user doesn't know what sections they need: Based on the type of document and template, suggest 3-5 sections appropriate for the doc type.
Ask if this structure works, or if they want to adjust it.
Once structure is agreed:
Create the initial document structure with placeholder text for all sections.
If access to artifacts is available: Use create_file to create an artifact. This gives both Claude and the user a scaffold to work from.
Inform them that the initial structure with placeholders for all sections will be created.
Create artifact with all section headers and brief placeholder text like "[To be written]" or "[Content here]".
Provide the scaffold link and indicate it's time to fill in each section.
If no access to artifacts: Create a markdown file in the working directory. Name it appropriately (e.g., decision-doc.md, technical-spec.md).
Inform them that the initial structure with placeholders for all sections will be created.
Create file with all section headers and placeholder text.
Confirm the filename has been created and indicate it's time to fill in each section.
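The scaffold described above is just section headers with placeholder text under each one. A minimal sketch of building it (the section names and function name are examples, not prescribed by the skill):

```python
def make_scaffold(title: str, sections: list[str]) -> str:
    """Build a markdown skeleton with a placeholder under every section header."""
    lines = [f"# {title}", ""]
    for name in sections:
        lines += [f"## {name}", "", "[To be written]", ""]
    return "\n".join(lines)
```

Each "[To be written]" placeholder then serves as a unique target for later section-by-section replacement.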
For each section:
Announce work will begin on the [SECTION NAME] section. Ask 5-10 clarifying questions about what should be included:
Generate 5-10 specific questions based on context and section purpose.
Inform them they can answer in shorthand or just indicate what's important to cover.
For the [SECTION NAME] section, brainstorm [5-20] things that might be included, depending on the section's complexity. Look for:
Generate 5-20 numbered options based on section complexity. At the end, offer to brainstorm more if they want additional options.
Ask which points should be kept, removed, or combined. Request brief justifications to help learn priorities for the next sections.
Provide examples:
If user gives freeform feedback (e.g., "looks good" or "I like most of it but...") instead of numbered selections, extract their preferences and proceed. Parse what they want kept/removed/changed and apply it.
Based on what they've selected, ask if there's anything important missing for the [SECTION NAME] section.
Use str_replace to replace the placeholder text for this section with the actual drafted content.
Announce the [SECTION NAME] section will be drafted now based on what they've selected.
If using artifacts: After drafting, provide a link to the artifact.
Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.
If using a file (no artifacts): After drafting, confirm completion.
Inform them the [SECTION NAME] section has been drafted in [filename]. Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.
Key instruction for user (include when drafting the first section): Provide a note: Instead of editing the doc directly, ask them to indicate what to change. This helps learning of their style for future sections. For example: "Remove the X bullet - already covered by Y" or "Make the third paragraph more concise".
As user provides feedback:
Use str_replace to make edits (never reprint the whole doc). Continue iterating until the user is satisfied with the section.
After 3 consecutive iterations with no substantial changes, ask if anything can be removed without losing important information.
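The targeted-edit discipline above (replace one exact match rather than reprinting the document) can be sketched as a plain string operation. This is an illustrative stand-in, not the actual editor tool:

```python
def str_replace(doc: str, old: str, new: str) -> str:
    """Replace exactly one occurrence of `old`, failing loudly otherwise."""
    # Requiring exactly one match means an edit can never land in the
    # wrong place or silently do nothing.
    count = doc.count(old)
    if count != 1:
        raise ValueError(f"expected exactly 1 match for {old!r}, found {count}")
    return doc.replace(old, new)
```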
When section is done, confirm [SECTION NAME] is complete. Ask if ready to move to the next section.
Repeat for all sections.
As approaching completion (80%+ of sections done), announce intention to re-read the entire document and check for:
Read entire document and provide feedback.
When all sections are drafted and refined: Announce all sections are drafted. Indicate intention to review the complete document one more time.
Review for overall coherence, flow, completeness.
Provide any final suggestions.
Ask if ready to move to Reader Testing, or if they want to refine anything else.
Goal: Test the document with a fresh Claude (no context bleed) to verify it works for readers.
Instructions to user: Explain that testing will now occur to see if the document actually works for readers. This catches blind spots - things that make sense to the authors but might confuse others.
If access to sub-agents is available (e.g., in Claude Code):
Perform the testing directly without user involvement.
Announce intention to predict what questions readers might ask when trying to discover this document.
Generate 5-10 questions that readers would realistically ask.
Announce that these questions will be tested with a fresh Claude instance (no context from this conversation).
For each question, invoke a sub-agent with just the document content and the question.
Summarize what Reader Claude got right/wrong for each question.
Announce additional checks will be performed.
Invoke sub-agent to check for ambiguity, false assumptions, contradictions.
Summarize any issues found.
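The key property of the sub-agent calls above is context isolation: each one receives only the document and one question. A minimal sketch of building such a prompt (the function name and wording are illustrative assumptions):

```python
def reader_prompt(document: str, question: str) -> str:
    """Build the isolated prompt a fresh 'Reader Claude' sub-agent receives."""
    # Only the finished document and one question are included --
    # nothing from the co-authoring conversation leaks in.
    return (
        "Here is a document:\n\n"
        f"{document}\n\n"
        "Using only the document above, answer this question:\n"
        f"{question}"
    )
```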
If issues found: Report that Reader Claude struggled with specific issues.
List the specific issues.
Indicate intention to fix these gaps.
Loop back to refinement for problematic sections.
If no access to sub-agents (e.g., claude.ai web interface):
The user will need to do the testing manually.
Ask what questions people might ask when trying to discover this document. What would they type into Claude.ai?
Generate 5-10 questions that readers would realistically ask.
Provide testing instructions:
For each question, instruct Reader Claude to provide:
Check if Reader Claude gives correct answers or misinterprets anything.
Also ask Reader Claude:
Ask what Reader Claude got wrong or struggled with. Indicate intention to fix those gaps.
Loop back to refinement for any problematic sections.
When Reader Claude consistently answers questions correctly and doesn't surface new gaps or ambiguities, the doc is ready.
When Reader Testing passes: Announce the doc has passed Reader Claude testing. Before completion:
Ask if they want one more review, or if the work is done.
If user wants final review, provide it. Otherwise: Announce document completion. Provide a few final tips:
Tone:
Handling Deviations:
Context Management:
Artifact Management:
Use create_file for drafting full sections and str_replace for all edits.
Quality over Speed:
Weekly Installs
13.6K
Repository
GitHub Stars
90.8K
First Seen
Jan 20, 2026
Security Audits
Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on
opencode: 10.4K
gemini-cli: 9.9K
codex: 9.7K
claude-code: 9.6K
cursor: 8.8K
github-copilot: 8.5K
97,400 weekly installs