filesystem-context by guanyang/antigravity-skills
npx skills add https://github.com/guanyang/antigravity-skills --skill filesystem-context
Use the filesystem as the primary overflow layer for agent context because context windows are limited while tasks often require more information than fits in a single window. Files let agents store, retrieve, and update an effectively unlimited amount of context through a single interface.
Prefer dynamic context discovery -- pulling relevant context on demand -- over static inclusion, because static context consumes tokens regardless of relevance and crowds out space for task-specific information.
Activate this skill when:
Diagnose context failures against these four modes, because each requires a different filesystem remedy:
Use the filesystem as the persistent layer that addresses all four: write once, store durably, retrieve selectively.
Treat static context (system instructions, tool definitions, critical rules) as expensive real estate -- it consumes tokens on every turn regardless of relevance. As agents accumulate capabilities, static context grows and crowds out dynamic information.
Use dynamic context discovery instead: include only minimal static pointers (names, one-line descriptions, file paths) and load full content with search tools when relevant. This is more token-efficient and often improves response quality by reducing contradictory or irrelevant information in the window.
Accept the trade-off: dynamic discovery requires the model to recognize when it needs more context. Current frontier models handle this well, but less capable models may fail to trigger loads. When in doubt, err toward including critical safety or correctness constraints statically.
Redirect large tool outputs to files instead of returning them directly to context, because a single web search or database query can dump thousands of tokens into message history where they persist for the entire conversation.
Write the output to a scratch file, extract a compact summary, and return a file reference. The agent then uses targeted retrieval (grep for patterns, read with line ranges) to access only what it needs.
import time

def handle_tool_output(tool_name: str, output: str, threshold: int = 2000) -> str:
    # Small outputs pass through; large outputs are offloaded to a scratch file.
    if len(output) < threshold:
        return output
    timestamp = int(time.time())
    file_path = f"scratch/{tool_name}_{timestamp}.txt"
    write_file(file_path, output)  # framework-provided file tool
    key_summary = extract_summary(output, max_tokens=200)  # compact summary helper
    return f"[Output written to {file_path}. Summary: {key_summary}]"
Use grep to search the offloaded file and read_file with line ranges to retrieve targeted sections, because this preserves full output for later reference while keeping only ~100 tokens in the active context.
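As a minimal sketch of that retrieval step, the grep and line-range reads can be approximated with two small helpers (`grep_file` and `read_lines` are hypothetical names, not part of any specific agent framework):

```python
import re

def grep_file(path: str, pattern: str, context: int = 2) -> list[str]:
    """Return matching lines plus `context` following lines (like grep -A)."""
    lines = open(path).read().splitlines()
    hits = []
    for i, line in enumerate(lines):
        if re.search(pattern, line):
            hits.extend(lines[i : i + 1 + context])
    return hits

def read_lines(path: str, start: int, end: int) -> str:
    """Read an inclusive 1-indexed line range, mimicking read_file with ranges."""
    lines = open(path).read().splitlines()
    return "\n".join(lines[start - 1 : end])
```

The agent first greps for a pattern to locate the relevant region, then reads a narrow line range around the hit, keeping the rest of the offloaded file out of context.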
Write plans to the filesystem because long-horizon tasks lose coherence when plans fall out of attention or get summarized away. The agent re-reads its plan at any point, restoring awareness of the objective and progress.
Store plans in structured format so they are both human-readable and machine-parseable:
# scratch/current_plan.yaml
objective: "Refactor authentication module"
status: in_progress
steps:
- id: 1
description: "Audit current auth endpoints"
status: completed
- id: 2
description: "Design new token validation flow"
status: in_progress
- id: 3
description: "Implement and test changes"
status: pending
Re-read the plan at the start of each turn or after any context refresh to re-orient, because this acts as "manipulating attention through recitation."
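The recitation itself can be a one-line rendering of the plan, re-inserted each turn. A minimal sketch, assuming the YAML above has already been parsed into a dict (`recite_plan` is a hypothetical helper):

```python
def recite_plan(plan: dict) -> str:
    """Render a compact status line to re-insert at the start of a turn."""
    done = sum(s["status"] == "completed" for s in plan["steps"])
    current = next((s for s in plan["steps"] if s["status"] == "in_progress"), None)
    line = f"Objective: {plan['objective']} ({done}/{len(plan['steps'])} steps done)"
    if current:
        line += f" | Now: {current['description']}"
    return line
```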
Route sub-agent findings through the filesystem instead of message passing, because multi-hop message chains degrade information through summarization at each hop ("game of telephone").
Have each sub-agent write directly to its own workspace directory. The coordinator reads these files directly, preserving full fidelity:
workspace/
agents/
research_agent/
findings.md
sources.jsonl
code_agent/
changes.md
test_results.txt
coordinator/
synthesis.md
Enforce per-agent directory isolation to prevent write conflicts and maintain clear ownership of each output artifact.
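A coordinator reading those workspaces directly can be sketched in a few lines, assuming the directory layout above (`collect_findings` is a hypothetical helper):

```python
from pathlib import Path

def collect_findings(workspace: str) -> dict[str, str]:
    """Read each sub-agent's findings file directly, preserving full fidelity."""
    results = {}
    for agent_dir in sorted(Path(workspace, "agents").iterdir()):
        findings = agent_dir / "findings.md"
        if findings.exists():
            results[agent_dir.name] = findings.read_text()
    return results
```

Because the coordinator reads the files themselves, nothing is lost to intermediate summarization hops.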
Store skills as files and include only skill names with brief descriptions in static context, because stuffing all instructions into the system prompt wastes tokens and can confuse the model with contradictory guidance.
Available skills (load with read_file when relevant):
- database-optimization: Query tuning and indexing strategies
- api-design: REST/GraphQL best practices
- testing-strategies: Unit, integration, and e2e testing patterns
Load the full skill file (e.g., skills/database-optimization/SKILL.md) only when the current task requires it. This converts O(n) static token cost into O(1) per task.
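As an illustrative sketch, the static index plus on-demand load might look like this (`SKILL_INDEX` and `load_skill` are hypothetical names; only the index lines live in static context):

```python
from pathlib import Path

# Minimal static pointers that stay in the system prompt.
SKILL_INDEX = {
    "database-optimization": "Query tuning and indexing strategies",
    "api-design": "REST/GraphQL best practices",
    "testing-strategies": "Unit, integration, and e2e testing patterns",
}

def load_skill(name: str, root: str = "skills") -> str:
    """Load a full skill file only when the current task needs it."""
    if name not in SKILL_INDEX:
        raise KeyError(f"Unknown skill: {name}")
    return Path(root, name, "SKILL.md").read_text()
```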
Persist terminal output to files automatically and use grep for selective retrieval, because terminal output from long-running processes accumulates rapidly and manual copy-paste is error-prone.
terminals/
1.txt # Terminal session 1 output
2.txt # Terminal session 2 output
Query with targeted grep (grep -A 5 "error" terminals/1.txt) instead of loading entire terminal histories into context.
Have agents write learned preferences and patterns to their own instruction files so subsequent sessions load this context automatically, instead of requiring manual system prompt updates.
import os
import yaml  # PyYAML

def remember_preference(key: str, value: str) -> None:
    preferences_file = "agent/user_preferences.yaml"
    prefs = {}
    if os.path.exists(preferences_file):
        with open(preferences_file) as f:
            prefs = yaml.safe_load(f) or {}
    prefs[key] = value
    with open(preferences_file, "w") as f:
        yaml.safe_dump(prefs, f)
Guard this pattern with validation because self-modification can accumulate incorrect or contradictory instructions over time. Treat it as experimental -- review persisted preferences periodically.
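One possible shape for that validation guard, run before any write (`validate_preference` is a hypothetical helper; the specific limits are illustrative, not prescribed by this skill):

```python
def validate_preference(prefs: dict, key: str, value: str) -> bool:
    """Reject obviously bad self-modifications before persisting them."""
    if not key.strip() or not value.strip():
        return False  # empty entries accumulate noise
    if prefs.get(key) not in (None, value):
        return False  # contradicts an existing preference; flag for human review
    if len(prefs) >= 100:
        return False  # cap growth to force periodic pruning
    return True
```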
Combine ls/list_dir, glob, grep, and read_file with line ranges for context discovery, because models are specifically trained on filesystem traversal and this combination often outperforms semantic search for technical content where structural patterns are clear.
- ls / list_dir: Discover directory structure
- glob: Find files matching patterns (e.g., **/*.py)
- grep: Search file contents, returning matching lines with context
- read_file with ranges: Read specific sections without loading entire files

Use filesystem search for structural and exact-match queries, and semantic search for conceptual queries. Combine both for comprehensive discovery.
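The glob-then-grep combination can be sketched in a few lines (`discover` is a hypothetical helper, not tied to any particular agent toolkit):

```python
from pathlib import Path

def discover(root: str, pattern: str, needle: str) -> list[str]:
    """Glob for candidate files, then search their contents for exact matches."""
    hits = []
    for path in Path(root).glob(pattern):
        if path.is_file() and needle in path.read_text():
            hits.append(str(path))
    return sorted(hits)
```

Scoping `pattern` to a directory and extension keeps the candidate set small before any file content is read.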
Apply filesystem patterns when the situation matches these criteria, because they add I/O overhead that is only justified by token savings or persistence needs:
Use when:
Avoid when:
Structure files for agent discoverability, because agents navigate by listing and reading directory names:
project/
scratch/ # Temporary working files
tool_outputs/ # Large tool results
plans/ # Active plans and checklists
memory/ # Persistent learned information
preferences.yaml # User preferences
patterns.md # Learned patterns
skills/ # Loadable skill definitions
agents/ # Sub-agent workspaces
Use consistent naming conventions and include timestamps or IDs in scratch files for disambiguation.
Measure where tokens originate before and after applying filesystem patterns, because optimizing without measurement leads to wasted effort:
Example 1: Tool Output Offloading
Input: Web search returns 8000 tokens
Before: 8000 tokens added to message history
After:
- Write to scratch/search_results_001.txt
- Return: "[Results in scratch/search_results_001.txt. Key finding: API rate limit is 1000 req/min]"
- Agent greps file when needing specific details
Result: ~100 tokens in context, 8000 tokens accessible on demand
Example 2: Dynamic Skill Loading
Input: User asks about database indexing
Static context: "database-optimization: Query tuning and indexing"
Agent action: read_file("skills/database-optimization/SKILL.md")
Result: Full skill loaded only when relevant
Example 3: Chat History as File Reference
Trigger: Context window limit reached, summarization required
Action:
1. Write full history to history/session_001.txt
2. Generate summary for new context window
3. Include reference: "Full history in history/session_001.txt"
Result: Agent can search history file to recover details lost in summarization
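The history-offload step in Example 3 can be sketched as follows (`offload_history` is a hypothetical helper; a real implementation would generate the summary with the model rather than a fixed string):

```python
def offload_history(messages: list[str], path: str, keep_last: int = 5) -> list[str]:
    """Write the full history to a file; keep a reference plus the tail in context."""
    with open(path, "w") as f:
        f.write("\n".join(messages))
    reference = f"[Full history in {path}; {len(messages)} messages archived]"
    return [reference] + messages[-keep_last:]
```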
Avoid overly broad glob patterns: a glob like **/* pulls irrelevant files into context, wasting tokens and confusing the model. Scope globs to specific directories and extensions.
Created: 2026-01-07 · Last Updated: 2026-03-17 · Author: Agent Skills for Context Engineering Contributors · Version: 1.1.0
Weekly Installs
253
Repository
GitHub Stars
498
First Seen
Jan 26, 2026
Security Audits
Gen Agent Trust Hub: Fail · Socket: Pass · Snyk: Warn
Installed on
opencode: 240
codex: 187
gemini-cli: 184
cursor: 180
github-copilot: 180
kimi-cli: 176