search-console by openclaudia/openclaudia-skills

npx skills add https://github.com/openclaudia/openclaudia-skills --skill search-console
Pull search performance data, index coverage, and Core Web Vitals from Google Search Console API.
Requires Google OAuth credentials:
- GOOGLE_CLIENT_ID
- GOOGLE_CLIENT_SECRET
- A valid OAuth access token with the https://www.googleapis.com/auth/webmasters.readonly scope

Set credentials in .env, .env.local, or ~/.claude/.env.global.
# Step 1: Authorization URL (user visits in browser)
echo "https://accounts.google.com/o/oauth2/v2/auth?client_id=${GOOGLE_CLIENT_ID}&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=https://www.googleapis.com/auth/webmasters.readonly&response_type=code&access_type=offline"
# Step 2: Exchange code for tokens
curl -s -X POST "https://oauth2.googleapis.com/token" \
-d "code={AUTH_CODE}" \
-d "client_id=${GOOGLE_CLIENT_ID}" \
-d "client_secret=${GOOGLE_CLIENT_SECRET}" \
-d "redirect_uri=urn:ietf:wg:oauth:2.0:oob" \
-d "grant_type=authorization_code"
# Step 3: Refresh expired token
curl -s -X POST "https://oauth2.googleapis.com/token" \
-d "refresh_token={REFRESH_TOKEN}" \
-d "client_id=${GOOGLE_CLIENT_ID}" \
-d "client_secret=${GOOGLE_CLIENT_SECRET}" \
-d "grant_type=refresh_token"
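The three curl steps above can also be driven from a small helper that builds the authorization URL and the token-endpoint form bodies. This is a sketch under our own naming — `build_auth_url` and `token_request_body` are illustrative helpers, not part of the skill:

```python
from urllib.parse import urlencode

AUTH_BASE = "https://accounts.google.com/o/oauth2/v2/auth"
SCOPE = "https://www.googleapis.com/auth/webmasters.readonly"

def build_auth_url(client_id: str) -> str:
    """Step 1: build the authorization URL the user opens in a browser."""
    params = {
        "client_id": client_id,
        "redirect_uri": "urn:ietf:wg:oauth:2.0:oob",
        "scope": SCOPE,
        "response_type": "code",
        "access_type": "offline",  # ask for a refresh_token as well
    }
    return f"{AUTH_BASE}?{urlencode(params)}"

def token_request_body(client_id, client_secret, *, code=None, refresh_token=None):
    """Form body for POST https://oauth2.googleapis.com/token.
    Pass code= for step 2 (exchange), refresh_token= for step 3 (refresh)."""
    body = {"client_id": client_id, "client_secret": client_secret}
    if code is not None:
        body.update(code=code,
                    redirect_uri="urn:ietf:wg:oauth:2.0:oob",
                    grant_type="authorization_code")
    else:
        body.update(refresh_token=refresh_token, grant_type="refresh_token")
    return body
```

POSTing the body (urlencoded) to the token endpoint then works exactly like the curl calls above.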
curl -s -H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
"https://www.googleapis.com/webmasters/v3/sites" \
| python3 -c "
import json, sys
data = json.load(sys.stdin)
for site in data.get('siteEntry', []):
    print(f\"{site['siteUrl']} | Permission: {site['permissionLevel']}\")
"
The site URL format is either https://example.com/ (URL prefix) or sc-domain:example.com (domain property).
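Either property format must be percent-encoded before it goes into an API path. A one-liner with the standard library covers both (the `encode_site` name is ours):

```python
from urllib.parse import quote

def encode_site(site_url: str) -> str:
    """Percent-encode a GSC property so it fits in a single URL path
    segment: safe='' forces ':' and '/' to be escaped too."""
    return quote(site_url, safe="")

print(encode_site("sc-domain:example.com"))  # sc-domain%3Aexample.com
print(encode_site("https://example.com/"))   # https%3A%2F%2Fexample.com%2F
```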
The core report: queries, pages, clicks, impressions, CTR, and average position.
POST https://www.googleapis.com/webmasters/v3/sites/{siteUrl}/searchAnalytics/query
Note: The {siteUrl} must be URL-encoded (e.g., https%3A%2F%2Fexample.com%2F or sc-domain%3Aexample.com).
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/searchAnalytics/query" \
-H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "2024-01-01",
"endDate": "2024-03-31",
"dimensions": ["query"],
"rowLimit": 50,
"startRow": 0
}'
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/searchAnalytics/query" \
-H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "2024-01-01",
"endDate": "2024-03-31",
"dimensions": ["page"],
"rowLimit": 50
}'
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/searchAnalytics/query" \
-H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "2024-01-01",
"endDate": "2024-03-31",
"dimensions": ["query", "page"],
"rowLimit": 100,
"dimensionFilterGroups": [{
"filters": [{
"dimension": "page",
"operator": "contains",
"expression": "/blog/"
}]
}]
}'
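A single request returns at most 25,000 rows; larger result sets are paged with `rowLimit` and `startRow`, as in the first example above. A sketch of the paging loop — `post_query` stands in for whatever transport you use (curl subprocess, urllib, etc.) and is an assumption, not a skill API:

```python
def fetch_all_rows(post_query, body, page_size=25000):
    """Collect every row from searchAnalytics/query by advancing
    startRow until the API returns a short (or empty) page.
    post_query(body) must POST the JSON body and return the parsed
    JSON response as a dict."""
    rows, start = [], 0
    while True:
        page = dict(body, rowLimit=page_size, startRow=start)
        batch = post_query(page).get("rows", [])
        rows.extend(batch)
        if len(batch) < page_size:  # last page reached
            return rows
        start += page_size
```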
| Dimension | Description |
|---|---|
| query | Search query |
| page | URL |
| country | Country code (ISO 3166-1 alpha-3) |
| device | DESKTOP, MOBILE, TABLET |
| date | Individual date |
| searchAppearance | Rich result type |
curl -s -X POST "..." | python3 -c "
import json, sys
data = json.load(sys.stdin)
print(f\"{'Query':<50} {'Clicks':>8} {'Impr':>8} {'CTR':>8} {'Pos':>6}\")
print('-' * 82)
for row in data.get('rows', []):
    keys = ' + '.join(row.get('keys', []))
    print(f\"{keys:<50} {row['clicks']:>8} {row['impressions']:>8} {row['ctr']*100:>7.1f}% {row['position']:>6.1f}\")
"
Track daily trends for queries and pages.
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/searchAnalytics/query" \
-H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "2024-01-01",
"endDate": "2024-03-31",
"dimensions": ["date"],
"rowLimit": 1000
}'
To track a specific query over time:
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/searchAnalytics/query" \
-H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "2024-01-01",
"endDate": "2024-03-31",
"dimensions": ["date"],
"dimensionFilterGroups": [{
"filters": [{
"dimension": "query",
"operator": "equals",
"expression": "your target keyword"
}]
}]
}'
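The per-date rows that come back can be rolled up locally before charting. A minimal sketch, aggregating to ISO weeks (the helper name and the weekly granularity are our choices, not part of the skill):

```python
from collections import defaultdict
from datetime import date

def weekly_totals(rows):
    """Aggregate daily searchAnalytics rows (dimensions: ['date'],
    so keys[0] is 'YYYY-MM-DD') into ISO-week click/impression totals."""
    weeks = defaultdict(lambda: {"clicks": 0, "impressions": 0})
    for row in rows:
        y, m, d = map(int, row["keys"][0].split("-"))
        iso = date(y, m, d).isocalendar()
        key = f"{iso[0]}-W{iso[1]:02d}"  # e.g. 2024-W01
        weeks[key]["clicks"] += row["clicks"]
        weeks[key]["impressions"] += row["impressions"]
    return dict(sorted(weeks.items()))
```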
Check if a specific URL is indexed.
POST https://searchconsole.googleapis.com/v1/urlInspection/index:inspect
curl -s -X POST \
"https://searchconsole.googleapis.com/v1/urlInspection/index:inspect" \
-H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"inspectionUrl": "https://example.com/page-to-check",
"siteUrl": "sc-domain:example.com"
}'
| Field | Description |
|---|---|
| inspectionResult.indexStatusResult.coverageState | Submitted and indexed, Crawled - currently not indexed, etc. |
| inspectionResult.indexStatusResult.robotsTxtState | ALLOWED or DISALLOWED |
| inspectionResult.indexStatusResult.indexingState | INDEXING_ALLOWED or INDEXING_NOT_ALLOWED |
| inspectionResult.indexStatusResult.lastCrawlTime | When Googlebot last crawled the URL |
| inspectionResult.indexStatusResult.crawledAs | DESKTOP or MOBILE |
| inspectionResult.mobileUsabilityResult.verdict | PASS, FAIL, or VERDICT_UNSPECIFIED |
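Those nested field paths are easier to report on once flattened. A small sketch (`summarize_inspection` is an illustrative helper of ours; it reads only the documented field names):

```python
def summarize_inspection(result: dict) -> dict:
    """Flatten the index-status fields of an index:inspect response
    into a single-level dict; missing fields come back as None."""
    idx = result.get("inspectionResult", {}).get("indexStatusResult", {})
    return {
        "coverage": idx.get("coverageState"),
        "robots": idx.get("robotsTxtState"),
        "indexing": idx.get("indexingState"),
        "last_crawl": idx.get("lastCrawlTime"),
        "crawled_as": idx.get("crawledAs"),
    }
```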
List and check sitemap status.
curl -s -H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/sitemaps" \
| python3 -c "
import json, sys
data = json.load(sys.stdin)
for sm in data.get('sitemap', []):
    print(f\"URL: {sm['path']}\")
    print(f\"  Type: {sm.get('type','')} | Submitted: {sm.get('lastSubmitted','')}\")
    print(f\"  URLs discovered: {sm.get('contents',[{}])[0].get('submitted','?')} | Indexed: {sm.get('contents',[{}])[0].get('indexed','?')}\")
    print()
"
Submit (or resubmit) a sitemap with a PUT to the sitemap's URL-encoded path:

curl -s -X PUT -H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/sitemaps/https%3A%2F%2Fexample.com%2Fsitemap.xml"
Use Search Console data to find SEO opportunities.
Queries with many impressions but low CTR suggest the title/description needs optimization.
# Pull queries, then filter for: impressions > 100 AND ctr < 0.03 AND position < 20
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/sc-domain%3Aexample.com/searchAnalytics/query" \
-H "Authorization: Bearer ${GSC_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "2024-01-01",
"endDate": "2024-03-31",
"dimensions": ["query", "page"],
"rowLimit": 1000
}' | python3 -c "
import json, sys
data = json.load(sys.stdin)
print('== Low CTR Opportunities (High impressions, low CTR, good position) ==')
print(f\"{'Query':<40} {'Page':<40} {'Impr':>6} {'CTR':>7} {'Pos':>5}\")
for row in data.get('rows', []):
    if row['impressions'] > 100 and row['ctr'] < 0.03 and row['position'] < 20:
        print(f\"{row['keys'][0]:<40} {row['keys'][1][-40:]:<40} {row['impressions']:>6} {row['ctr']*100:>6.1f}% {row['position']:>5.1f}\")
"
Queries ranking on page 1-2 that could be pushed to top 5 with content optimization.
# Filter for position between 5 and 20 with decent impressions
curl -s -X POST "..." | python3 -c "
import json, sys
data = json.load(sys.stdin)
print('== Striking Distance Keywords (Position 5-20) ==')
opps = [r for r in data.get('rows',[]) if 5 <= r['position'] <= 20 and r['impressions'] > 50]
opps.sort(key=lambda x: x['impressions'], reverse=True)
for row in opps[:30]:
    print(f\"{row['keys'][0]:<50} Pos: {row['position']:>5.1f} Impr: {row['impressions']:>6} Clicks: {row['clicks']:>4}\")
"
Find queries where multiple pages compete for the same keyword.
# Pull query+page data, then group by query to find duplicates
curl -s -X POST "..." | python3 -c "
import json, sys
from collections import defaultdict
data = json.load(sys.stdin)
query_pages = defaultdict(list)
for row in data.get('rows', []):
    query_pages[row['keys'][0]].append({
        'page': row['keys'][1],
        'clicks': row['clicks'],
        'impressions': row['impressions'],
        'position': row['position']
    })
print('== Keyword Cannibalization (multiple pages for same query) ==')
for query, pages in sorted(query_pages.items(), key=lambda x: -sum(p['impressions'] for p in x[1])):
    if len(pages) > 1:
        total_impr = sum(p['impressions'] for p in pages)
        if total_impr > 100:
            print(f\"\nQuery: {query} ({total_impr} total impressions)\")
            for p in sorted(pages, key=lambda x: -x['impressions']):
                print(f\"  {p['page'][-60:]} Pos: {p['position']:.1f} Impr: {p['impressions']} Clicks: {p['clicks']}\")
"
When asked for a complete GSC audit:
## Search Console Audit: {domain}
### Period: {date range}
### Summary
| Metric | Current | Previous | Change |
|--------|---------|----------|--------|
| Clicks | X | Y | +Z% |
| Impressions | X | Y | +Z% |
| Avg CTR | X% | Y% | +Z pp |
| Avg Position | X | Y | +Z |
### Top Queries
| Query | Clicks | Impressions | CTR | Position |
|-------|--------|-------------|-----|----------|
| ... | ... | ... | ... | ... |
### Optimization Opportunities
#### Title/Description Optimization (High Impressions, Low CTR)
1. "{query}" - {impressions} impressions, {ctr}% CTR, position {pos}
- Page: {url}
- Recommendation: ...
#### Content Optimization (Striking Distance)
1. "{query}" - position {pos}, {impressions} impressions
- Action: Add {query} to H2, expand section on {topic}
#### Cannibalization Fixes
1. "{query}" appears on {n} pages
- Consolidate to: {best_url}
- Redirect/noindex: {other_urls}
| Error | Cause | Fix |
|---|---|---|
| 403 | No access to this property | Verify ownership in GSC |
| 400 | Invalid date range | Dates must be within last 16 months |
| Empty rows | No data matching filters | Broaden date range or remove filters |
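When a call fails, the API responds with a JSON error envelope ({"error": {"code": ..., "message": ...}}) rather than rows. A tiny guard makes the failures in the table above surface as readable messages (the helper name is ours):

```python
def check_gsc_error(resp: dict) -> dict:
    """Raise a readable error when the API returned an error envelope;
    otherwise pass the response through unchanged."""
    err = resp.get("error")
    if err:
        raise RuntimeError(f"GSC API error {err.get('code')}: {err.get('message')}")
    return resp
```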
Weekly Installs: 62
GitHub Stars: 316
First Seen: Feb 14, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: opencode (56), gemini-cli (55), claude-code (54), codex (52), github-copilot (51), cursor (50)