Install this skill:

```shell
npx skills add https://github.com/hyperpuncher/dotagents --skill scrapling
```
Scrapling is a powerful Python web scraping library with a comprehensive CLI for extracting data from websites directly from the terminal without writing code. The primary use case is the extract command group for quick data extraction.
Install with the shell extras using uv:
```shell
uv tool install "scrapling[shell]"
```
Then install fetcher dependencies (browsers, system dependencies, fingerprint manipulation):
```shell
scrapling install
```
The scrapling extract command group allows you to download and extract content from websites without writing any code. Output format is determined by file extension:
- .md: Convert HTML to Markdown
- .html: Save raw HTML
- .txt: Extract clean text content
```shell
# Basic website download as text
scrapling extract get "https://example.com" page_content.txt

# Download as markdown
scrapling extract get "https://blog.example.com" article.md

# Save raw HTML
scrapling extract get "https://example.com" page.html
```
| Use Case | Command |
|---|---|
| Simple websites, blogs, news articles | get |
| Modern web apps, dynamic content (JavaScript) | fetch |
| Protected sites, Cloudflare, anti-bot | stealthy-fetch |
| Form submissions, APIs | post, put, delete |
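The table above can be read as a small decision function. A hypothetical helper, shown only to make the selection logic explicit (it is not part of the scrapling CLI):

```python
def choose_command(method: str = "GET", js_heavy: bool = False,
                   protected: bool = False) -> str:
    """Pick a `scrapling extract` subcommand per the decision table."""
    if method.upper() != "GET":
        return method.lower()    # post, put, delete for forms and APIs
    if protected:
        return "stealthy-fetch"  # Cloudflare / anti-bot protection
    if js_heavy:
        return "fetch"           # dynamic, JavaScript-rendered content
    return "get"                 # simple static sites, blogs, articles
```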
The most common command for downloading website content:

```shell
# Basic download
scrapling extract get "https://news.site.com" news.md

# Download with custom timeout
scrapling extract get "https://example.com" content.txt --timeout 60

# Extract specific content using CSS selectors
scrapling extract get "https://blog.example.com" articles.md --css-selector "article"

# Send request with cookies
scrapling extract get "https://scrapling.requestcatcher.com" content.md \
  --cookies "session=abc123; user=john"

# Add user agent
scrapling extract get "https://api.site.com" data.json \
  -H "User-Agent: MyBot 1.0"

# Add multiple headers
scrapling extract get "https://site.com" page.html \
  -H "Accept: text/html" \
  -H "Accept-Language: en-US"

# With query parameters
scrapling extract get "https://api.example.com" data.json \
  -p "page=1" -p "limit=10"
```
GET options:

```
-H, --headers TEXT        HTTP headers "Key: Value" (multiple allowed)
--cookies TEXT            Cookies "name1=value1;name2=value2"
--timeout INTEGER         Request timeout in seconds (default: 30)
--proxy TEXT              Proxy URL (from the $PROXY_URL env variable)
-s, --css-selector TEXT   Extract specific content with a CSS selector
-p, --params TEXT         Query parameters "key=value" (multiple allowed)
--follow-redirects / --no-follow-redirects   (default: True)
--verify / --no-verify    SSL verification (default: True)
--impersonate TEXT        Browser to impersonate (chrome, firefox)
--stealthy-headers / --no-stealthy-headers   (default: True)
```
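When driving the CLI from a script, it can help to assemble the argument vector programmatically rather than via string interpolation. A sketch using only the flags listed above; the wrapper itself is hypothetical, not part of scrapling:

```python
def build_get_argv(url, output, headers=None, params=None,
                   cookies=None, timeout=None):
    """Assemble an argv for `scrapling extract get` from dicts,
    avoiding manual shell quoting."""
    argv = ["scrapling", "extract", "get", url, output]
    for key, value in (headers or {}).items():
        argv += ["-H", f"{key}: {value}"]      # repeatable -H flag
    for key, value in (params or {}).items():
        argv += ["-p", f"{key}={value}"]       # repeatable -p flag
    if cookies:
        argv += ["--cookies", cookies]
    if timeout is not None:
        argv += ["--timeout", str(timeout)]
    return argv
```

Run the result with `subprocess.run(argv, check=True)`; passing a list rather than a shell string sidesteps quoting issues in URLs and header values.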
```shell
# Submit form data
scrapling extract post "https://api.site.com/search" results.html \
  --data "query=python&type=tutorial"

# Send JSON data
scrapling extract post "https://api.site.com" response.json \
  --json '{"username": "test", "action": "search"}'
```
POST options: (same as GET plus)

```
-d, --data TEXT   Form data "param1=value1&param2=value2"
-j, --json TEXT   JSON data as string
```
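Values containing `&`, `=`, or spaces should be escaped before being passed to `--data` or `--json`; the Python standard library does this safely. A minimal sketch:

```python
import json
from urllib.parse import urlencode

# --data expects URL-encoded form pairs; urlencode escapes reserved
# characters (spaces become '+', the form-encoding convention).
form = urlencode({"query": "python web scraping", "type": "tutorial"})

# --json expects a JSON string; json.dumps handles quoting and escaping.
payload = json.dumps({"username": "test", "action": "search"})
```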
```shell
# Send data
scrapling extract put "https://api.example.com" results.html \
  --data "update=info" \
  --impersonate "firefox"

# Send JSON data
scrapling extract put "https://api.example.com" response.json \
  --json '{"username": "test", "action": "search"}'

scrapling extract delete "https://api.example.com/resource" response.txt

# With impersonation
scrapling extract delete "https://api.example.com/" response.txt \
  --impersonate "chrome"
```
Use browser-based fetching for JavaScript-heavy sites or when HTTP requests fail.
For websites that load content dynamically or have light protection:
```shell
# Wait for JavaScript to load and network activity to finish
scrapling extract fetch "https://example.com" content.md --network-idle

# Wait for specific element to appear
scrapling extract fetch "https://example.com" data.txt \
  --wait-selector ".content-loaded"

# Visible browser mode for debugging
scrapling extract fetch "https://example.com" page.html \
  --no-headless --disable-resources

# Use installed Chrome browser
scrapling extract fetch "https://example.com" content.md --real-chrome

# With CSS selector extraction
scrapling extract fetch "https://example.com" articles.md \
  --css-selector "article" \
  --network-idle
```
fetch options:

```
--headless / --no-headless   Run browser headless (default: True)
--disable-resources          Drop unnecessary resources for speed boost
--network-idle               Wait for network idle
--timeout INTEGER            Timeout in milliseconds (default: 30000)
--wait INTEGER               Additional wait time in ms (default: 0)
-s, --css-selector TEXT      Extract specific content
--wait-selector TEXT         Wait for selector before proceeding
--locale TEXT                User locale (default: system)
--real-chrome                Use installed Chrome browser
--proxy TEXT                 Proxy URL
-H, --extra-headers TEXT     Extra headers (multiple)
```
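Note that fetch takes its --timeout in milliseconds (default 30000), while get takes seconds (default 30), an easy source of 1000x mistakes when switching commands. A trivial guard (hypothetical helper):

```python
def fetch_timeout_ms(seconds: float) -> int:
    """Convert a human-friendly seconds value into the milliseconds
    that `scrapling extract fetch --timeout` expects."""
    return int(seconds * 1000)
```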
For websites with anti-bot protection or Cloudflare:
```shell
# Bypass basic protection
scrapling extract stealthy-fetch "https://example.com" content.md

# Solve Cloudflare challenges
scrapling extract stealthy-fetch "https://nopecha.com/demo/cloudflare" data.txt \
  --solve-cloudflare \
  --css-selector "#padded_content a"

# Use proxy for anonymity (set PROXY_URL environment variable)
scrapling extract stealthy-fetch "https://site.com" content.md \
  --proxy "$PROXY_URL"

# Hide canvas fingerprint
scrapling extract stealthy-fetch "https://example.com" content.md \
  --hide-canvas \
  --block-webrtc
```
stealthy-fetch options: (same as fetch plus)

```
--block-webrtc                  Block WebRTC entirely
--solve-cloudflare              Solve Cloudflare challenges
--allow-webgl / --block-webgl   Allow WebGL (default: True)
--hide-canvas                   Add noise to canvas operations
```
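The proxy example above reads $PROXY_URL from the environment; in a script it is worth adding the flag only when the variable is actually set, so traffic never silently goes out unproxied. A hypothetical sketch, not part of scrapling:

```python
import os

def stealthy_argv(url, output, env=None):
    """Build a `scrapling extract stealthy-fetch` argv, attaching
    --proxy only when PROXY_URL is present in the environment."""
    env = os.environ if env is None else env
    argv = ["scrapling", "extract", "stealthy-fetch", url, output]
    proxy = env.get("PROXY_URL")
    if proxy:
        argv += ["--proxy", proxy]
    return argv
```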
Extract specific content with the -s or --css-selector flag:
```shell
# Extract all articles
scrapling extract get "https://blog.example.com" articles.md -s "article"

# Extract specific class
scrapling extract get "https://example.com" titles.txt -s ".title"

# Extract by ID
scrapling extract get "https://example.com" content.md -s "#main-content"

# Extract links (href attributes)
scrapling extract get "https://example.com" links.txt -s "a::attr(href)"

# Extract text only
scrapling extract get "https://example.com" titles.txt -s "h1::text"

# Extract multiple elements with fetch
scrapling extract fetch "https://example.com" products.md \
  -s ".product-card" \
  --network-idle
```
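For readers who want to post-process saved HTML themselves, the `a::attr(href)` selector above can be roughly approximated with the standard library; scrapling's own selector engine is of course far more capable:

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Rough stdlib analogue of the a::attr(href) selector."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opened tag
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value is not None:
                    self.hrefs.append(value)

collector = HrefCollector()
collector.feed('<p><a href="/docs">Docs</a> <a href="/blog">Blog</a></p>')
```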
```shell
scrapling --help
scrapling extract --help
scrapling extract get --help
scrapling extract post --help
scrapling extract fetch --help
scrapling extract stealthy-fetch --help
```
Weekly installs: 129
First seen: Feb 19, 2026
Installed on: gemini-cli (126), codex (126), opencode (126), amp (124), github-copilot (124), kimi-cli (124)