web-scraping-automation by aaaaqwq/claude-code-skills
npx skills add https://github.com/aaaaqwq/claude-code-skills --skill web-scraping-automation
This skill automates website data scraping and API calls, covering:
Target analysis:
Solution design:
Script development:
Testing and optimization:
import requests
from bs4 import BeautifulSoup

def scrape_website(url):
    headers = {'User-Agent': 'Mozilla/5.0'}
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')
    # Extract data
    data = []
    for item in soup.select('.product'):
        data.append({
            'title': item.select_one('.title').text,
            'price': item.select_one('.price').text
        })
    return data
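The 'price' field above comes back as raw text (currency symbols, thousands separators). A small normalizer can convert it to a number before storage; this is a hypothetical helper, not part of the skill itself:

```python
import re

def parse_price(text):
    # Keep the first run of digits, with optional thousands separators
    # and a decimal part, e.g. '¥1,299.00' -> 1299.0.
    # Returns None when no number is found.
    match = re.search(r'\d[\d,]*(?:\.\d+)?', text)
    if not match:
        return None
    return float(match.group().replace(',', ''))
```

Applied to each scraped item, this makes prices sortable and comparable instead of opaque strings.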
import requests

def call_api(endpoint, params=None):
    headers = {
        'Authorization': 'Bearer YOUR_TOKEN',
        'Content-Type': 'application/json'
    }
    response = requests.get(endpoint, headers=headers, params=params)
    return response.json()
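call_api fetches a single endpoint, but many APIs page their results. A minimal pagination loop, written against a pluggable get_page callable so that the assumed response shape (an 'items' list plus a 'next' flag) is explicit and easy to adapt to the real API:

```python
def fetch_all_pages(get_page, max_pages=100):
    # get_page(page_number) is expected to return a dict like
    # {'items': [...], 'next': True/False} -- an assumed shape,
    # not something the skill's API guarantees.
    items = []
    for page in range(1, max_pages + 1):
        data = get_page(page)
        items.extend(data.get('items', []))
        if not data.get('next'):
            break
    return items
```

With call_api above, get_page could be `lambda p: call_api(endpoint, params={'page': p})`.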
from selenium import webdriver
from selenium.webdriver.common.by import By

def scrape_dynamic_page(url):
    driver = webdriver.Chrome()
    driver.get(url)
    # Wait for the page to load
    driver.implicitly_wait(10)
    # Extract data
    elements = driver.find_elements(By.CLASS_NAME, 'item')
    data = [elem.text for elem in elements]
    driver.quit()
    return data
Weekly Installs: 166
Repository: https://github.com/aaaaqwq/claude-code-skills
GitHub Stars: 11
First Seen: Jan 22, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Fail, Snyk: Warn
Installed on:
opencode: 141
cursor: 140
codex: 139
gemini-cli: 138
github-copilot: 131
kimi-cli: 110