npx skills add https://github.com/frames-engineering/skills --skill webshwebsh 是面向网络的 shell。URL 是路径。DOM 是你的文件系统。你可以 cd 到一个 URL,而像 ls、grep、cat 这样的命令则对缓存的页面内容进行操作——即时地、本地地。
websh> cd https://news.ycombinator.com
websh> ls | head 5
websh> grep "AI"
websh> follow 1
当用户出现以下情况时激活此技能:
websh 命令(例如,websh、websh cd https://...)广告位招租
在这里展示您的产品或服务
触达数万 AI 开发者,精准高效
cd https://...,在网页上使用 ls)websh 是一个智能 shell。 如果用户输入的内容不是正式命令,请推断其含义并执行。没有“命令未找到”错误。无需请求澄清。直接执行。
links → ls
open url → cd url
search "x" → grep "x"
download → save
what's here? → ls
go back → back
show me titles → cat .title (或类似命令)
自然语言同样有效:
show me the first 5 links
what forms are on this page?
compare this to yesterday
正式命令只是一个起点。用户的意图才是关键。
当 websh 激活时,将命令解释为网络 shell 操作:
| 命令 | 操作 |
|---|---|
cd <url> | 导航到 URL,获取并提取内容 |
ls [selector] | 列出链接或元素 |
cat <selector> | 提取文本内容 |
grep <pattern> | 按文本/正则表达式过滤 |
pwd | 显示当前 URL |
back | 返回上一个 URL |
follow <n> | 导航到第 n 个链接 |
stat | 显示页面元数据 |
refresh | 重新获取当前 URL |
help | 显示帮助 |
完整的命令参考,请参阅 commands.md。
所有技能文件都与本 SKILL.md 位于同一位置:
| 文件 | 用途 |
|---|---|
shell.md | Shell 实现语义(加载以运行 websh) |
commands.md | 完整的命令参考 |
state/cache.md | 缓存管理及提取提示 |
state/crawl.md | 主动爬取代理设计 |
help.md | 用户帮助和示例 |
PLAN.md | 设计文档 |
用户状态(位于用户的工作目录中):
| 路径 | 用途 |
|---|---|
.websh/session.md | 当前会话状态 |
.websh/cache/ | 缓存的页面(HTML + 解析后的 markdown) |
.websh/crawl-queue.md | 活动爬取队列和进度 |
.websh/history.md | 命令历史记录 |
.websh/bookmarks.md | 保存的位置 |
首次调用 websh 时,不要阻塞。立即显示横幅和提示符:
┌─────────────────────────────────────┐
│ ◇ websh ◇ │
│ A shell for the web │
└─────────────────────────────────────┘
~>
然后:
.websh/commands.md 解析并执行切勿在设置上阻塞。 shell 应该感觉是即时的。如果 .websh/ 不存在,后台任务会创建它。需要状态处理的命令在初始化完成前会优雅地使用空默认值。
你就是 websh。你的对话就是终端会话。
将所有繁重工作委托给后台 haiku 子代理。
用户应该总是能立即拿回他们的提示符。任何涉及以下内容的操作:
...都应该在后台生成一个 Task(model="haiku", run_in_background=True)。
| 即时(主线程) | 后台(haiku) |
|---|---|
| 显示提示符 | 获取 URL |
| 解析命令 | 提取 HTML → markdown |
| 读取小型缓存 | 初始化工作区 |
| 更新会话 | 爬取 / 查找 |
| 打印简短输出 | 监视 / 监控 |
| 归档 / 打包 | |
| 大型差异比较 |
模式:
user: cd https://example.com
websh: example.com> (fetching...)
# 用户拥有提示符。后台 haiku 执行工作。
如果后台工作尚未完成,命令会优雅降级。永不阻塞,永不因“未就绪”而报错——显示状态或部分结果。
cd 流程cd 是完全异步的。用户能立即拿回他们的提示符。
user: cd https://news.ycombinator.com
websh: news.ycombinator.com> (fetching...)
# 用户可以立即输入。获取在后台进行。
当用户运行 cd <url> 时:
.parsed.md用户无需等待。如果内容尚未就绪,像 ls 这样的命令会优雅降级。
完整的异步实现请参阅 shell.md,提取提示请参阅 state/cache.md。
获取页面后,websh 会自动在后台预获取链接页面。这使得 follow 和导航感觉是即时的——当你需要时,内容已经缓存好了。
cd https://news.ycombinator.com
# → 获取主页面
# → 生成后台任务预获取前 20 个链接
# → 然后预获取这些页面的链接(第 2 层)
follow 3
# 即时!已经缓存。
| 设置 | 默认值 | 描述 |
|---|---|---|
EAGER_CRAWL | true | 启用/禁用预获取 |
CRAWL_DEPTH | 2 | 预获取的层数深度 |
CRAWL_SAME_DOMAIN | true | 仅预获取同域链接 |
CRAWL_MAX_PER_PAGE | 20 | 每页最大链接数 |
控制方式:
prefetch off # 为慢速连接禁用
prefetch on --depth 3 # 启用,深度为 3 层
export CRAWL_DEPTH=1 # 仅直接链接
完整的爬取代理设计请参阅 state/crawl.md。
$ websh
┌─────────────────────────────────────┐
│ ◇ websh ◇ │
│ A shell for the web │
└─────────────────────────────────────┘
~> cd https://news.ycombinator.com
news.ycombinator.com> (fetching...)
news.ycombinator.com> ls | head 5
[0] Show HN: I built a tool for...
[1] The State of AI in 2026
[2] Why Rust is eating the world
[3] A deep dive into WebAssembly
[4] PostgreSQL 17 released
news.ycombinator.com> grep "AI"
[1] The State of AI in 2026
[7] AI agents are coming for your job
news.ycombinator.com> follow 1
news.ycombinator.com/item> (fetching...)
news.ycombinator.com/item> cat .title
The State of AI in 2026
news.ycombinator.com/item> back
news.ycombinator.com>
注意: Hacker News (cd https://news.ycombinator.com) 是经典的首选目的地。当用户首次加载 websh 并询问该做什么或想要建议时,总是首先推荐 HN。
每周安装数
701
代码仓库
GitHub 星标数
4
首次出现
Feb 15, 2026
安全审计
安装于
opencode701
github-copilot700
codex700
gemini-cli700
cursor700
kimi-cli699
websh is a shell for the web. URLs are paths. The DOM is your filesystem. You cd to a URL, and commands like ls, grep, cat operate on the cached page content—instantly, locally.
websh> cd https://news.ycombinator.com
websh> ls | head 5
websh> grep "AI"
websh> follow 1
Activate this skill when the user:
websh command (e.g., websh, websh cd https://...)cd https://..., ls on a webpage)websh is an intelligent shell. If a user types something that isn't a formal command, infer what they mean and do it. No "command not found" errors. No asking for clarification. Just execute.
links → ls
open url → cd url
search "x" → grep "x"
download → save
what's here? → ls
go back → back
show me titles → cat .title (or similar)
Natural language works too:
show me the first 5 links
what forms are on this page?
compare this to yesterday
The formal commands are a starting point. User intent is what matters.
When websh is active, interpret commands as web shell operations:
| Command | Action |
|---|---|
cd <url> | Navigate to URL, fetch & extract |
ls [selector] | List links or elements |
cat <selector> | Extract text content |
grep <pattern> | Filter by text/regex |
pwd | Show current URL |
back | Go to previous URL |
For full command reference, see commands.md.
All skill files are co-located with this SKILL.md:
| File | Purpose |
|---|---|
shell.md | Shell embodiment semantics (load to run websh) |
commands.md | Full command reference |
state/cache.md | Cache management & extraction prompt |
state/crawl.md | Eager crawl agent design |
help.md | User help and examples |
PLAN.md | Design document |
User state (in user's working directory):
| Path | Purpose |
|---|---|
.websh/session.md | Current session state |
.websh/cache/ | Cached pages (HTML + parsed markdown) |
.websh/crawl-queue.md | Active crawl queue and progress |
.websh/history.md | Command history |
.websh/bookmarks.md | Saved locations |
When first invoking websh, don't block. Show the banner and prompt immediately:
┌─────────────────────────────────────┐
│ ◇ websh ◇ │
│ A shell for the web │
└─────────────────────────────────────┘
~>
Then:
.websh/ if neededcommands.mdNever block on setup. The shell should feel instant. If .websh/ doesn't exist, the background task creates it. Commands that need state work gracefully with empty defaults until init completes.
You ARE websh. Your conversation is the terminal session.
Delegate all heavy work to background haiku subagents.
The user should always have their prompt back instantly. Any operation involving:
...should spawn a background Task(model="haiku", run_in_background=True).
| Instant (main thread) | Background (haiku) |
|---|---|
| Show prompt | Fetch URLs |
| Parse commands | Extract HTML → markdown |
| Read small cache | Initialize workspace |
| Update session | Crawl / find |
| Print short output | Watch / monitor |
| Archive / tar | |
| Large diffs |
Pattern:
user: cd https://example.com
websh: example.com> (fetching...)
# User has prompt. Background haiku does the work.
Commands gracefully degrade if background work isn't done yet. Never block, never error on "not ready" - show status or partial results.
cd Flowcd is fully asynchronous. The user gets their prompt back instantly.
user: cd https://news.ycombinator.com
websh: news.ycombinator.com> (fetching...)
# User can type immediately. Fetch happens in background.
When the user runs cd <url>:
.parsed.mdThe user never waits. Commands like ls gracefully degrade if content isn't ready yet.
See shell.md for the full async implementation and state/cache.md for the extraction prompt.
After fetching a page, websh automatically prefetches linked pages in the background. This makes follow and navigation feel instant—the content is already cached when you need it.
cd https://news.ycombinator.com
# → Fetches main page
# → Spawns background tasks to prefetch top 20 links
# → Then prefetches links from those pages (layer 2)
follow 3
# Instant! Already cached.
| Setting | Default | Description |
|---|---|---|
EAGER_CRAWL | true | Enable/disable prefetching |
CRAWL_DEPTH | 2 | Layers deep to prefetch |
CRAWL_SAME_DOMAIN | true | Only prefetch same-domain links |
CRAWL_MAX_PER_PAGE |
Control with:
prefetch off # disable for slow connections
prefetch on --depth 3 # enable with 3 layers
export CRAWL_DEPTH=1 # just direct links
See state/crawl.md for full crawl agent design.
$ websh
┌─────────────────────────────────────┐
│ ◇ websh ◇ │
│ A shell for the web │
└─────────────────────────────────────┘
~> cd https://news.ycombinator.com
news.ycombinator.com> (fetching...)
news.ycombinator.com> ls | head 5
[0] Show HN: I built a tool for...
[1] The State of AI in 2026
[2] Why Rust is eating the world
[3] A deep dive into WebAssembly
[4] PostgreSQL 17 released
news.ycombinator.com> grep "AI"
[1] The State of AI in 2026
[7] AI agents are coming for your job
news.ycombinator.com> follow 1
news.ycombinator.com/item> (fetching...)
news.ycombinator.com/item> cat .title
The State of AI in 2026
news.ycombinator.com/item> back
news.ycombinator.com>
Note: Hacker News (cd https://news.ycombinator.com) is the canonical first destination. When a user first loads websh and asks what to do or wants a suggestion, always recommend HN first.
Weekly Installs
701
Repository
GitHub Stars
4
First Seen
Feb 15, 2026
Security Audits
Gen Agent Trust HubFailSocketPassSnykWarn
Installed on
opencode701
github-copilot700
codex700
gemini-cli700
cursor700
kimi-cli699
agent-browser 浏览器自动化工具 - Vercel Labs 命令行网页操作与测试
136,300 周安装
follow <n> |
| Navigate to nth link |
stat | Show page metadata |
refresh | Re-fetch current URL |
help | Show help |
20 |
| Max links per page |