google-news by outsharp/shipp-skills
npx skills add https://github.com/outsharp/shipp-skills --skill google-news

Google News is a free news aggregator that collects headlines from thousands of publishers around the world. Google exposes its feeds via public RSS 2.0 endpoints that require no authentication or API key.
https://news.google.com/rss
All feed URLs are built by appending paths and query parameters to this base URL.
Every feed URL accepts the following query parameters to control region and language:
| Parameter | Required | Description | Example |
|---|---|---|---|
| hl | Yes | Interface language / locale code | en-US, fr, de, ja, pt-BR, es-419 |
| gl | Yes | Country / geographic location (ISO 3166-1 alpha-2) | US, GB, IN, DE, JP, BR |
| ceid | Yes | Compound locale key in the form {gl}:{language} | US:en, GB:en, DE:de, JP:ja, BR:pt-419 |

Important: All three parameters should be consistent. Mismatched values may return unexpected or empty results.
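The consistency rule above can be sketched as a small helper (the function name is our own, not part of any API); it builds a feed URL and asserts that the language halves of hl and ceid agree:

```python
BASE = "https://news.google.com/rss"

def feed_url(hl: str, gl: str, language: str) -> str:
    """Build a Google News RSS URL whose hl/gl/ceid parameters agree."""
    ceid = f"{gl}:{language}"
    # The language half of hl must match the language half of ceid;
    # mismatched values may return unexpected or empty feeds.
    assert hl.split("-")[0] == language.split("-")[0], "hl/ceid language mismatch"
    return f"{BASE}?hl={hl}&gl={gl}&ceid={ceid}"

print(feed_url("en-US", "US", "en"))
# https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en
```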
The following locations have been tested and confirmed to return valid RSS feeds (HTTP 200):
| Location | hl | gl | ceid | Example URL |
|---|---|---|---|---|
| 🇺🇸 United States | en-US | US | US:en | https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en |
| 🇬🇧 United Kingdom | en-GB | GB | GB:en | https://news.google.com/rss?hl=en-GB&gl=GB&ceid=GB:en |
| 🇮🇳 India | en-IN | IN | IN:en | https://news.google.com/rss?hl=en-IN&gl=IN&ceid=IN:en |
| 🇦🇺 Australia | en-AU | AU | AU:en | https://news.google.com/rss?hl=en-AU&gl=AU&ceid=AU:en |
| 🇨🇦 Canada | en-CA | CA | CA:en | https://news.google.com/rss?hl=en-CA&gl=CA&ceid=CA:en |
| 🇩🇪 Germany | de | DE | DE:de | https://news.google.com/rss?hl=de&gl=DE&ceid=DE:de |
| 🇫🇷 France | fr | FR | FR:fr | https://news.google.com/rss?hl=fr&gl=FR&ceid=FR:fr |
| 🇯🇵 Japan | ja | JP | JP:ja | https://news.google.com/rss?hl=ja&gl=JP&ceid=JP:ja |
| 🇧🇷 Brazil | pt-BR | BR | BR:pt-419 | https://news.google.com/rss?hl=pt-BR&gl=BR&ceid=BR:pt-419 |
| 🇲🇽 Mexico | es-419 | MX | MX:es-419 | https://news.google.com/rss?hl=es-419&gl=MX&ceid=MX:es-419 |
| 🇮🇱 Israel | en-IL | IL | IL:en | https://news.google.com/rss?hl=en-IL&gl=IL&ceid=IL:en |
Returns the current top stories for a given location.
URL pattern:
https://news.google.com/rss?hl={hl}&gl={gl}&ceid={gl}:{lang}
Example — US top stories:
https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en
Returns articles for a specific news topic / section.
URL pattern:
https://news.google.com/rss/topics/{TOPIC_ID}?hl={hl}&gl={gl}&ceid={gl}:{lang}
Known topic IDs (English, US):
| Topic | Topic ID |
|---|---|
| World | CAAqJggKIiBDQkFTRWdvSUwyMHZNRGx1YlY4U0FtVnVHZ0pWVXlnQVAB |
| Nation / U.S. | CAAqIggKIhxDQkFTRHdvSkwyMHZNRGxqTjNjU0FtVnVLQUFQAQ |
| Business | CAAqJggKIiBDQkFTRWdvSUwyMHZNRGx6TVdZU0FtVnVHZ0pWVXlnQVAB |
| Technology | CAAqJggKIiBDQkFTRWdvSUwyMHZNRGRqTVhZU0FtVnVHZ0pWVXlnQVAB |
| Entertainment | CAAqJggKIiBDQkFTRWdvSUwyMHZNREpxYW5RU0FtVnVHZ0pWVXlnQVAB |
| Sports | CAAqJggKIiBDQkFTRWdvSUwyMHZNRFp1ZEdvU0FtVnVHZ0pWVXlnQVAB |
| Science | CAAqJggKIiBDQkFTRWdvSUwyMHZNRFp0Y1RjU0FtVnVHZ0pWVXlnQVAB |
| Health | CAAqIQgKIhtDQkFTRGdvSUwyMHZNR3QwTlRFU0FtVnVLQUFQAQ |
Example — Technology news (US):
https://news.google.com/rss/topics/CAAqJggKIiBDQkFTRWdvSUwyMHZNRGRqTVhZU0FtVnVHZ0pWVXlnQVAB?hl=en-US&gl=US&ceid=US:en
Note: Topic IDs are base64-encoded protocol buffer strings. They can differ by language/region. The IDs above are for en-US. To find topic IDs for other locales, inspect the RSS links on the Google News website for that locale.
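The note above can be checked locally: a topic ID is two nested base64 layers around a protobuf payload that names a knowledge-graph topic and a locale. A heuristic sketch (the unwrapping is not a documented format, so treat it as exploratory only):

```python
import base64
import re

def peek_topic_id(topic_id: str) -> bytes:
    """Heuristically unwrap the two base64 layers of a Google News topic ID."""
    # Outer layer: URL-safe base64 (restore the stripped padding first).
    raw = base64.urlsafe_b64decode(topic_id + "=" * (-len(topic_id) % 4))
    # The protobuf payload embeds a second, standard-base64 ASCII run.
    inner = re.search(rb"[A-Za-z0-9+/]{16,}", raw).group()
    inner = inner[: len(inner) // 4 * 4]  # trim to a decodable length
    return base64.b64decode(inner)

tech = "CAAqJggKIiBDQkFTRWdvSUwyMHZNRGRqTVhZU0FtVnVHZ0pWVXlnQVAB"
payload = peek_topic_id(tech)
print(b"/m/07c1v" in payload, b"US" in payload)  # topic id and locale bytes
```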
Returns articles matching a search query.
URL pattern:
https://news.google.com/rss/search?q={query}&hl={hl}&gl={gl}&ceid={gl}:{lang}
Query modifiers:
| Modifier | Description | Example |
|---|---|---|
| + or space | AND (default) | q=artificial+intelligence |
| OR | OR operator | q=Tesla+OR+SpaceX |
| - | Exclude term | q=Apple+-fruit |
| "..." | Exact phrase (URL-encode the quotes) | q=%22climate+change%22 |
| when:7d | Time filter — last N days/hours | q=Bitcoin+when:7d |
| when:1h | Time filter — last 1 hour | q=breaking+news+when:1h |
| after:YYYY-MM-DD | Articles after a date | q=Olympics+after:2024-07-01 |
| before:YYYY-MM-DD | Articles before a date | q=Olympics+before:2024-08-15 |
| site: | Restrict to a domain | q=AI+site:reuters.com |
Example — search for "artificial intelligence" in the last 7 days:
https://news.google.com/rss/search?q=artificial+intelligence+when:7d&hl=en-US&gl=US&ceid=US:en
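Queries containing spaces, quotes, or modifiers need URL encoding before being placed in q=. A minimal sketch with an exact-phrase query of our own (percent-encoded spaces work as well as the + form shown in the table):

```python
import urllib.parse

# quote() percent-encodes the double quotes and spaces, so the whole
# phrase survives as a single q= parameter value.
query = urllib.parse.quote('"climate change" when:7d')
url = f"https://news.google.com/rss/search?q={query}&hl=en-US&gl=US&ceid=US:en"
print(url)
# ...search?q=%22climate%20change%22%20when%3A7d&hl=en-US&gl=US&ceid=US:en
```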
All feeds return RSS 2.0 XML. Here is the general structure:
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
<channel>
<generator>NFE/5.0</generator>
<title>Top stories - Google News</title>
<link>https://news.google.com/?hl=en-US&amp;gl=US&amp;ceid=US:en</link>
<language>en-US</language>
<webMaster>news-webmaster@google.com</webMaster>
<copyright>...</copyright>
<lastBuildDate>Wed, 18 Feb 2026 20:50:00 GMT</lastBuildDate>
<item>
<title>Article headline - Publisher Name</title>
<link>https://news.google.com/rss/articles/...</link>
<guid isPermaLink="true">https://news.google.com/rss/articles/...</guid>
<pubDate>Wed, 18 Feb 2026 19:05:07 GMT</pubDate>
<description>
<!-- HTML ordered list of related articles -->
<ol>
<li><a href="...">Article Title</a> <font color="#6f6f6f">Publisher</font></li>
...
</ol>
</description>
<source url="https://publisher-domain.com">Publisher Name</source>
</item>
<!-- more <item> elements -->
</channel>
</rss>
Key fields of <item>:
| Field | Description |
|---|---|
| <title> | Headline text followed by - Publisher Name |
| <link> | Google News redirect URL. Visiting it in a browser redirects to the actual article. |
| <guid> | Unique identifier (same as <link>) |
| <pubDate> | Publication date in RFC 2822 format |
| <description> | HTML snippet containing an ordered list (<ol>) of related/clustered articles with links and publisher names |
| <source url="..."> | Publisher name and homepage URL |
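Because <title> appends the publisher after the final " - ", the two halves can be separated by splitting once from the right; the headline itself may contain dashes, so a left split would break it (the sample strings below are invented):

```python
title = "Markets rally - and doubts linger - Example News"

# rsplit with maxsplit=1 splits only at the last " - ", keeping any
# dashes inside the headline intact.
headline, publisher = title.rsplit(" - ", 1)
print(headline)   # Markets rally - and doubts linger
print(publisher)  # Example News
```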
curl -s "https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en" \
| grep -oP '<title>\K[^<]+'
import feedparser
feed = feedparser.parse(
"https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en"
)
for entry in feed.entries:
print(f"{entry.published} — {entry.title}")
print(f" Link: {entry.link}")
print()
TOPIC="CAAqJggKIiBDQkFTRWdvSUwyMHZNRGRqTVhZU0FtVnVHZ0pWVXlnQVAB"
curl -s "https://news.google.com/rss/topics/${TOPIC}?hl=en-US&gl=US&ceid=US:en" \
| xmllint --xpath '//item/title/text()' -
import feedparser
import urllib.parse
query = urllib.parse.quote("artificial intelligence when:7d")
url = f"https://news.google.com/rss/search?q={query}&hl=en-US&gl=US&ceid=US:en"
feed = feedparser.parse(url)
for entry in feed.entries[:10]:
print(f"• {entry.title}")
const https = require("https");
const { parseStringPromise } = require("xml2js");
const url =
"https://news.google.com/rss?hl=en-GB&gl=GB&ceid=GB:en";
https.get(url, (res) => {
let data = "";
res.on("data", (chunk) => (data += chunk));
res.on("end", async () => {
const result = await parseStringPromise(data);
const items = result.rss.channel[0].item || [];
items.slice(0, 10).forEach((item) => {
console.log(item.title[0]);
});
});
});
import feedparser
from html.parser import HTMLParser
class RelatedParser(HTMLParser):
def __init__(self):
super().__init__()
self.articles = []
self._in_a = False
self._href = ""
self._text = ""
def handle_starttag(self, tag, attrs):
if tag == "a":
self._in_a = True
self._href = dict(attrs).get("href", "")
self._text = ""
def handle_endtag(self, tag):
if tag == "a" and self._in_a:
self.articles.append({"title": self._text, "link": self._href})
self._in_a = False
def handle_data(self, data):
if self._in_a:
self._text += data
feed = feedparser.parse(
"https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en"
)
for entry in feed.entries[:3]:
print(f"\n=== {entry.title} ===")
parser = RelatedParser()
parser.feed(entry.description)
for art in parser.articles:
print(f" • {art['title']}")
print(f" {art['link']}")
import feedparser
REGIONS = {
"US": "hl=en-US&gl=US&ceid=US:en",
"UK": "hl=en-GB&gl=GB&ceid=GB:en",
"DE": "hl=de&gl=DE&ceid=DE:de",
"JP": "hl=ja&gl=JP&ceid=JP:ja",
"BR": "hl=pt-BR&gl=BR&ceid=BR:pt-419",
}
for region, params in REGIONS.items():
feed = feedparser.parse(f"https://news.google.com/rss?{params}")
print(f"\n--- {region} Top 3 ---")
for entry in feed.entries[:3]:
print(f" • {entry.title}")
#!/usr/bin/env bash
# Poll a "breaking news" search feed and print titles not seen before.
FEED="https://news.google.com/rss/search?q=breaking+news+when:1h&hl=en-US&gl=US&ceid=US:en"
SEEN_FILE="/tmp/gnews_seen.txt"
touch "$SEEN_FILE"
while true; do
  XML=$(curl -s "$FEED")   # fetch once per cycle, not once per item
  echo "$XML" | grep -oP '<guid[^>]*>\K[^<]+' | while read -r guid; do
    if ! grep -qF "$guid" "$SEEN_FILE"; then
      echo "$guid" >> "$SEEN_FILE"
      # Naive XML scraping: \Q...\E treats the guid as a literal so its
      # URL characters are not misread as regex syntax.
      TITLE=$(echo "$XML" | grep -oP "<item>.*?<guid[^>]*>\Q${guid}\E.*?</item>" \
        | grep -oP '<title>\K[^<]+' | head -1)
      echo "[NEW] $TITLE"
    fi
  done
  sleep 120
done
Article links in the RSS feed point to https://news.google.com/rss/articles/..., which redirects (HTTP 302/303) to the actual publisher URL. To resolve the final URL:
curl -Ls -o /dev/null -w '%{url_effective}' \
"https://news.google.com/rss/articles/CBMiWkFV..."
import requests
response = requests.head(
"https://news.google.com/rss/articles/CBMiWkFV...",
allow_redirects=True,
timeout=10,
)
print(response.url) # final publisher URL
Google does not publish official rate limits for the RSS feeds. Based on community observations:
| Guideline | Recommendation |
|---|---|
| Polling interval | ≥ 60 seconds between requests for the same feed |
| Concurrent requests | Keep below ~10 concurrent connections |
| Burst behavior | Rapid bursts may trigger HTTP 429 or CAPTCHA challenges |
| User-Agent | Use a descriptive User-Agent; empty or bot-like strings may be blocked |
If you receive an HTTP 429 response, back off exponentially (e.g., 1 min → 2 min → 4 min).
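That retry policy can be sketched as a small helper. The function name and defaults are our own; the fetch callable is injected so the backoff logic stays testable, and in practice it would wrap requests.get with a descriptive User-Agent:

```python
import time

def fetch_with_backoff(get, url, max_tries=5, base_delay=60.0, sleep=time.sleep):
    """Call get(url); on HTTP 429, wait base_delay, then 2x, 4x ... and retry."""
    delay = base_delay
    for _ in range(max_tries):
        resp = get(url)
        if resp.status_code != 429:
            return resp          # success, redirect, or a non-retryable error
        sleep(delay)             # back off before the next attempt
        delay *= 2               # exponential growth: 1 min, 2 min, 4 min ...
    raise RuntimeError(f"still rate limited after {max_tries} tries")
```

With requests, the fetch callable could be, for example, `lambda u: requests.get(u, headers={"User-Agent": "MyNewsBot/1.0 (contact@example.com)"}, timeout=10)`.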
| HTTP Status | Meaning | Action |
|---|---|---|
| 200 | Success | Parse the RSS XML |
| 301/302 | Redirect | Follow the redirect (most HTTP clients do this automatically) |
| 404 | Feed not found | Check the URL, topic ID, or locale parameters |
| 429 | Rate limited | Back off and retry after a delay |
| 5xx | Server error | Retry with exponential backoff |
Best practices:
- Use feedparser in Python — it handles RSS parsing, date normalization, and encoding edge cases.
- Combine query modifiers, e.g. q=Tesla+site:reuters.com+when:30d, for precise results.
- Topic IDs can differ per locale (e.g. hl=de). Inspect the Google News page in that locale to find the correct ID.
- The <description> field is HTML: it contains clustered/related articles as an <ol> list. Parse the HTML to extract multiple sources per story.
- The <title> includes the publisher; the format is Headline text - Publisher Name. Split on " - " (space-dash-space) from the right to separate them.
- Send a descriptive User-Agent such as User-Agent: MyNewsBot/1.0 (contact@example.com). Requests without one may be blocked in some environments.
Weekly installs: 173 · GitHub stars: 2 · First seen: Feb 18, 2026
Security audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
Installed on: codex (171), kimi-cli (170), gemini-cli (170), amp (170), github-copilot (170), opencode (170)