dial-your-context by sanity-io/agent-context
npx skills add https://github.com/sanity-io/agent-context --skill dial-your-context
Help a user create the Instructions field content for their Sanity Agent Context MCP. The goal is a concise set of pure deltas — only information the agent can't figure out from the auto-generated schema.
The Agent Context MCP already provides the agent with:
The Instructions field you're crafting gets injected as a ## Custom instructions section between ## Response style and ## Tools in the MCP's instructions blob. It should contain only what the schema doesn't make obvious:
- Field-level surprises (e.g. `body` is actually a slug, `hero` is a reference to `mediaAsset`)
- Non-obvious traversal paths (e.g. "go `product` → `productFeature` and match on the feature's `id` field — the schema shows each hop but not the full path")
- Data-shape gotchas (e.g. "the `product` type has a `features` array but it's always empty — use `support-product` instead")
- Dead fields (e.g. "the `subtitle` field is unused — ignore it")

Never duplicate what the schema already communicates clearly.
You need one of these to run this session:
Path A — Write access (recommended): A Sanity write token or the general Sanity MCP (OAuth). This lets you create a draft context doc, write instructions + filter to it during the session, and promote it to production when done. Production is never touched until you're ready.
Path B — URL params only: Use ?instructions= and ?groqFilter= URL query params on the MCP endpoint to test everything. At the end, provide the final content for the user to enter manually in Sanity Studio. Works with both base and document URLs.
Both paths are safe — neither modifies the production agent during the session.
Goal: Establish MCP access, set up a safe working environment.
Connect to the user's Sanity Agent Context MCP. Get the project ID and dataset from the user if not already known. The slug is only needed if they have an existing Agent Context document.
Set up your working environment:
Path A (write access): Create a new draft context doc by copying the existing one (if any) to a new slug like tuning-draft. All exploration and iteration happens against this draft — the production agent is untouched.
Path B (no write access): Use URL query params throughout the session:
- `?instructions=""` — forces a blank slate (ignores existing instructions)
- `?groqFilter=<expression>` — applies a filter without writing to the context doc

Check if the context document already has instructions content:
Verify you can query the dataset by running a simple GROQ query like *[0..2]._type to confirm access.
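The access check can be scripted against Sanity's HTTP Query API. A minimal sketch — the project id and dataset are placeholders, and the endpoint shape follows Sanity's documented data-query URL format:

```python
from urllib.parse import quote

# Placeholder values — substitute your own project id and dataset.
project_id = "yourProjectId"
dataset = "production"
query = "*[0..2]._type"  # the cheap smoke-test query from this step

# GET this URL (with an auth token if the dataset is private) to confirm access.
url = (
    f"https://{project_id}.api.sanity.io/v2021-10-21"
    f"/data/query/{dataset}?query={quote(query)}"
)
print(url)
```

A non-empty `result` array in the response confirms the dataset is reachable.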
Output: Confirmed MCP access, safe working environment established (draft doc or URL params), any existing instructions surfaced to user.
Goal: Understand the dataset through conversation, not just exploration.
Retrieve the schema (the MCP provides this). Present the document types to the user in a clear list:
Here are the document types in your dataset:
- `article` (14 fields)
- `author` (8 fields)
- `category` (5 fields)
- ...
Which of these are the ones your agent will need to work with?
This is a conversation, not a monologue. Ask the user:
Suggest a filter. The MCP supports a groqFilter — a full GROQ expression that scopes which documents the agent can access. This is high-leverage — it reduces noise significantly and prevents the agent from querying irrelevant types.
The filter is a GROQ expression string, not just a type list. This means you can carve out exactly the document set you want:
- `_type in ["product", "support-article", "productFeature"]`
- `!(_id in path("drafts.**")) && _type in ["product", "article"]`
- `_type in ["product", "article"] && lang == "en-us"`
- `_type in ["product", "article"] && !(_id in path("drafts.**")) && defined(title)`

Based on the conversation, propose a filter:
Based on what you've told me, I'd suggest this filter:
`_type in ["article", "author", "category", "tag"]`

This means the agent won't see `siteSettings`, `redirect`, `migration`, etc. Does that sound right?
Apply the filter immediately. Once the user agrees:
- Path A: write the `groqFilter` field to the draft context doc
- Path B: append `?groqFilter=<expression>` to all subsequent MCP calls

All exploration from this point forward should use the agreed filter.
Output: A shared understanding of which types matter, known quirks, relationships, and an active filter. There's no point exploring types the production agent won't see.
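Filter expressions like the ones proposed in this step can be composed mechanically from the conversation's outcomes. A minimal Python sketch; the helper name is hypothetical and not part of the MCP:

```python
def build_groq_filter(types, exclude_drafts=True, extra=None):
    """Compose a GROQ filter expression from an agreed type allow-list.

    extra: an optional additional GROQ clause, e.g. 'lang == "en-us"'.
    """
    type_list = ", ".join(f'"{t}"' for t in types)
    clauses = [f"_type in [{type_list}]"]
    if exclude_drafts:
        clauses.append('!(_id in path("drafts.**"))')
    if extra:
        clauses.append(extra)
    return " && ".join(clauses)
```

The result is a plain GROQ expression string, ready to write to the `groqFilter` field or URL-encode into `?groqFilter=`.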
Goal: Get concrete examples of what the production agent will be asked.
Ask the user:
What questions will people ask the agent that uses this context? Give me 5-20 examples — the more realistic, the better.
Examples might be:
These questions drive the exploration in Step 4. They tell you what query patterns actually matter.
For simple datasets, 5 questions is fine. For complex ones, push for 15-20.
Output: A numbered list of expected questions.
Goal: Answer each expected question using the MCP, track what works and what doesn't.
Steps 4–6 are iterative, not sequential. Verify findings with the user as you go. Don't explore 15 questions, draft everything, then discover half your claims don't hold up.
Work through the expected questions one by one (or in logical groups). For each question:
Critical: Do not assume. If a query returns empty results or unexpected data:
Track your findings in a simple table:
| # | Question | Query tried | Result | Notes |
|---|---|---|---|---|
| 1 | "Recent articles" | `*[_type == "article"] \| order(publishedAt desc)[0..4]` | ✅ 5 results | Works with schema alone |
| 2 | "Articles by author" | `*[_type == "article" && references(authorId)]` | ⚠️ Empty | Authors linked via `contributors[].person`, not direct ref |
| 3 | "Published only" | `*[_type == "article" && status == "published"]` | ❌ No `status` field | User confirms: use `!(_id in path("drafts.**"))` instead |
Adapt to scale:
Output: A findings table with verified results for each expected question.
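If you keep the findings as structured data during the session, rendering the table is trivial. A sketch — field names here are illustrative, not part of any API; note the pipe-escaping so GROQ's `|` operator doesn't break table cells:

```python
# Each finding mirrors one row of the Step 4 table.
findings = [
    {"q": "Recent articles",
     "query": '*[_type == "article"] | order(publishedAt desc)[0..4]',
     "result": "✅ 5 results", "note": "Works with schema alone"},
    {"q": "Articles by author",
     "query": '*[_type == "article" && references(authorId)]',
     "result": "⚠️ Empty", "note": "Linked via contributors[].person"},
]

def to_markdown(rows):
    lines = ["# | Question | Query tried | Result | Notes",
             "---|---|---|---|---"]
    for i, r in enumerate(rows, 1):
        # Escape literal pipes so GROQ's `|` doesn't split the cell.
        query = r["query"].replace("|", "\\|")
        lines.append(f'{i} | "{r["q"]}" | `{query}` | {r["result"]} | {r["note"]}')
    return "\n".join(lines)
```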
Goal: Distill findings into concise, factual Instructions content.
Review the findings table from Step 4. Include only items marked ⚠️ or ❌ — things that required non-obvious patterns or failed with the obvious approach.
Write the Instructions as short, declarative statements organized by category:
### Rules
- Always filter drafts: use `!(_id in path("drafts.**"))` — there is no `status` field
- Always include `[_lang == "en"]` for localized content unless user specifies otherwise
### Schema notes
- `contributors` on `article` is an array of objects with a `person` reference to `author` — not a direct author reference
- `hero` on `article` is a reference to `mediaAsset`, not an image field
- `body` on `page` is a Portable Text array, not a string — use `pt::text(body)` for plain text search
### Query patterns
- Articles by author: `*[_type == "article" && contributors[].person._ref == $authorId]`
- Published articles by date: `*[_type == "article" && !(_id in path("drafts.**"))] | order(publishedAt desc)`
### Known limitations
- `subtitle` field on `article` is unused — ignore it
- `relatedArticles` is manually curated and often empty for older content
Keep it tight. Each line should pass this test: "Would an agent with the schema alone get this wrong?" If not, cut it. If you're unsure, test it — try answering 2-3 questions with ?instructions="" and see what the model gets wrong on its own; that's your empirical baseline for what actually needs to be here.
Do not include:
Output: A draft Instructions block, typically 10-40 lines depending on dataset complexity.
Goal: Ensure every line in the draft is backed by evidence.
Go through the draft Instructions line by line. For each claim, show the user:
Example:
Claim: "Always filter drafts using `!(_id in path("drafts.**"))` — there is no `status` field"

Evidence:
- `*[_type == "article" && defined(status)][0..2]` → 0 results
- `*[_type == "article" && _id in path("drafts.**")][0..2]` → 3 draft documents found

Is this correct?
If the user corrects a claim, update the draft immediately.
If the user adds new information ("oh, and you should also know that..."), add it to the draft and verify it the same way.
Output: A verified Instructions block where every claim has been confirmed by the user.
Goal: Get the Instructions and filter into production safely.
Present the final Instructions content and filter to the user for one last review:
Here's the final configuration:
Filter (GROQ expression):
`_type in ["article", "author", "category", "tag"]`

Instructions: [final instructions block]
Ready to deploy?
Path A (write access):
- Write the `instructions` and `groqFilter` fields to the draft context doc
- Verify the instructions appear under `## Custom instructions` in the MCP's instructions blob
- Promote: either update the production doc's `instructions` and `groqFilter` fields to match, or update the production agent's MCP URL to point to the new slug

Path B (no write access):
Provide the final MCP URL with all params baked in:
https://api.sanity.io/vX/agent-context/{project}/{dataset}/{slug}?instructions=<URL-encoded>&groqFilter=<URL-encoded>
Also provide the raw content separately for the user to paste into their Agent Context document in Sanity Studio:
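Assembling that URL mostly comes down to encoding the two params correctly. A minimal sketch; `vX` is kept verbatim from the template above and must be replaced with your actual API version, and the helper name is hypothetical:

```python
from urllib.parse import quote

def build_context_url(project, dataset, slug, instructions, groq_filter):
    # "vX" is the placeholder from the endpoint template — substitute the
    # real API version before deploying.
    base = f"https://api.sanity.io/vX/agent-context/{project}/{dataset}/{slug}"
    return f"{base}?instructions={quote(instructions)}&groqFilter={quote(groq_filter)}"
```

Spaces, brackets, and quotes in the GROQ expression all need percent-encoding, which `quote` handles.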
After deployment, verify: Query the production MCP endpoint and confirm the instructions and filter are active.
Output: Instructions and filter live in production, verified working.
This workflow scales to any dataset size:
Small dataset (3-5 types, 5 questions):
Large dataset (50+ types, 20 questions):
The filter matters more for large datasets. A 50-type dataset where the agent only needs 8 types benefits enormously from a filter.
Throughout the session, maintain a mental model of:
- [ ] MCP access verified
- [ ] Working environment set up (draft context doc or URL params)
- [ ] Existing instructions reviewed (if any)
- [ ] Schema discussed with user
- [ ] Filter agreed and applied
- [ ] Expected questions collected
- [ ] Questions explored and findings tracked
- [ ] Draft instructions written
- [ ] Each claim verified with evidence
- [ ] Instructions deployed to production
- [ ] Production deployment verified
This checklist is your progress tracker. Share it with the user periodically so they know where you are in the process.
Weekly Installs
114
Repository
GitHub Stars
4
First Seen
Feb 26, 2026
Security Audits
Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
Installed on
amp (113), gemini-cli (113), codex (113), kimi-cli (113), cursor (113), opencode (113)