tavily-best-practices by tavily-ai/skills
```shell
npx skills add https://github.com/tavily-ai/skills --skill tavily-best-practices
```
Tavily is a search API designed for LLMs, enabling AI applications to access real-time web data.
Python:

```shell
pip install tavily-python
```

JavaScript:

```shell
npm install @tavily/core
```
See references/sdk.md for complete SDK reference.
```python
from tavily import TavilyClient

# Uses TAVILY_API_KEY env var (recommended)
client = TavilyClient()

# With project tracking (for usage organization)
client = TavilyClient(project_id="your-project-id")

# Async client for parallel queries
from tavily import AsyncTavilyClient
async_client = AsyncTavilyClient()
```
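The async client pays off when you fan several queries out at once. A minimal sketch of the pattern with `asyncio.gather`, using a placeholder coroutine standing in for `async_client.search` (swap in the real client call in your application):

```python
import asyncio

# Placeholder simulating async_client.search; returns a dict shaped
# loosely like a Tavily search response. Replace with the real call.
async def search(query: str) -> dict:
    return {"query": query, "results": []}

async def run_queries(queries):
    # gather preserves input order in its results
    return await asyncio.gather(*(search(q) for q in queries))

responses = asyncio.run(run_queries([
    "quantum computing breakthroughs",
    "LLM agent frameworks",
]))
print([r["query"] for r in responses])
```

Because `gather` returns results in input order, you can zip responses back to their originating queries without extra bookkeeping.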
For custom agents/workflows:
| Need | Method |
|---|---|
| Web search results | search() |
| Content from specific URLs | extract() |
| Content from entire site | crawl() |
| URL discovery from site | map() |
For out-of-the-box research:
| Need | Method |
|---|---|
| End-to-end research with AI synthesis | research() |
```python
response = client.search(
    query="quantum computing breakthroughs",  # Keep under 400 chars
    max_results=10,
    search_depth="advanced"
)
print(response)
```
Key parameters: query, max_results, search_depth (ultra-fast/fast/basic/advanced), include_domains, exclude_domains, time_range
See references/search.md for complete search reference.
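Rather than printing the raw response, you will usually filter results by relevance score and format them as context for an LLM. A sketch of that step, using made-up sample data in the general shape of a Tavily search response (a `results` list with `title`, `url`, `content`, `score`):

```python
# Illustrative sample response; the data below is invented for
# demonstration, not real API output.
response = {
    "query": "quantum computing breakthroughs",
    "results": [
        {"title": "Breakthrough A", "url": "https://example.com/a",
         "content": "Summary of A...", "score": 0.91},
        {"title": "Breakthrough B", "url": "https://example.com/b",
         "content": "Summary of B...", "score": 0.78},
    ],
}

# Keep only high-confidence hits and join them into prompt context
context = "\n\n".join(
    f"{r['title']} ({r['url']})\n{r['content']}"
    for r in response["results"]
    if r["score"] >= 0.5
)
print(context)
```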
```python
# Simple one-step extraction
response = client.extract(
    urls=["https://docs.example.com"],
    extract_depth="advanced"
)
print(response)
```
Key parameters: urls (max 20), extract_depth, query, chunks_per_source (1-5)
See references/extract.md for complete extract reference.
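Since extraction can fail per-URL, it helps to separate successes from failures before processing. A sketch against invented sample data, assuming the response splits pages into `results` (with `raw_content`) and `failed_results`, check references/extract.md for the exact field names:

```python
# Illustrative sample response; data is invented for demonstration.
response = {
    "results": [
        {"url": "https://docs.example.com", "raw_content": "# Docs\n..."},
    ],
    "failed_results": [
        {"url": "https://broken.example.com", "error": "timeout"},
    ],
}

# Map successful URLs to their content; collect failures for retry/logging
pages = {r["url"]: r["raw_content"] for r in response["results"]}
failed = [f["url"] for f in response["failed_results"]]
print(f"extracted {len(pages)} page(s), {len(failed)} failure(s)")
```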
```python
response = client.crawl(
    url="https://docs.example.com",
    instructions="Find API documentation pages",  # Semantic focus
    extract_depth="advanced"
)
print(response)
```
Key parameters: url, max_depth, max_breadth, limit, instructions, chunks_per_source, select_paths, exclude_paths
See references/crawl.md for complete crawl reference.
```python
response = client.map(
    url="https://docs.example.com"
)
print(response)
```
```python
import time

# For comprehensive multi-topic research
result = client.research(
    input="Analyze competitive landscape for X in SMB market",
    model="pro"  # or "mini" for focused queries, "auto" when unsure
)
request_id = result["request_id"]

# Poll until completed
response = client.get_research(request_id)
while response["status"] not in ["completed", "failed"]:
    time.sleep(10)
    response = client.get_research(request_id)

print(response["content"])  # The research report
```
Key parameters: input, model ("mini"/"pro"/"auto"), stream, output_schema, citation_format
See references/research.md for complete research reference.
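The bare polling loop above can spin forever if a job stalls, so production code usually adds a deadline. A minimal sketch of a reusable helper; `fetch` stands in for `lambda: client.get_research(request_id)`, and the simulated job below is purely illustrative:

```python
import time

def poll_until_done(fetch, interval=10, timeout=600, sleep=time.sleep):
    """Call fetch() until status is terminal or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        response = fetch()
        if response["status"] in ("completed", "failed"):
            return response
        if time.monotonic() >= deadline:
            raise TimeoutError("research did not finish in time")
        sleep(interval)

# Simulated job that completes on the third poll (sleep disabled for demo)
states = iter([
    {"status": "pending"},
    {"status": "pending"},
    {"status": "completed", "content": "report"},
])
result = poll_until_done(lambda: next(states), sleep=lambda _: None)
print(result["status"])
```

Passing `sleep` as a parameter keeps the helper testable without real delays; in application code the defaults give the same 10-second cadence as the loop above.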
For complete parameters, response fields, patterns, and examples, see the files under references/.
Weekly installs: 4.6K
GitHub stars: 132
First seen: Jan 23, 2026
Security audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Warn
Installed on: opencode (4.2K), gemini-cli (4.1K), codex (4.1K), github-copilot (4.0K), kimi-cli (4.0K), amp (4.0K)