fastmcp by jezweb/claude-skills
npx skills add https://github.com/jezweb/claude-skills --skill fastmcp
FastMCP is a Python framework for building Model Context Protocol (MCP) servers that expose tools, resources, and prompts to Large Language Models like Claude. This skill provides production-tested patterns, error prevention, and deployment strategies for building robust MCP servers.
pip install fastmcp
# or
uv pip install fastmcp
from fastmcp import FastMCP
# MUST be at module level for FastMCP Cloud
mcp = FastMCP("My Server")
@mcp.tool()
async def hello(name: str) -> str:
"""Say hello to someone."""
return f"Hello, {name}!"
if __name__ == "__main__":
mcp.run()
Run it:
# Local development
python server.py
# With FastMCP CLI
fastmcp dev server.py
# HTTP mode
python server.py --transport http --port 8000
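The `--transport`/`--port` flags above imply that `server.py` parses its own CLI arguments. A minimal sketch of that flag handling (flag names taken from the commands above; FastMCP's own CLI may handle this for you, so treat this as one possible wiring):

```python
import argparse

def parse_transport_args(argv=None) -> argparse.Namespace:
    # Mirrors the flags shown above; the parser itself is this sketch's choice
    parser = argparse.ArgumentParser()
    parser.add_argument("--transport", default="stdio", choices=["stdio", "http"])
    parser.add_argument("--port", type=int, default=8000)
    return parser.parse_args(argv)

# In server.py you would then dispatch:
# args = parse_transport_args()
# if args.transport == "http":
#     mcp.run(transport="http", port=args.port)
# else:
#     mcp.run()
```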
auth_route parameter
$ref resolution, OAuth Proxy validation, OpenAPI 3.1 support
ctx.sample() now accepts tools for agentic workflows
AnthropicSamplingHandler promoted from experimental
ctx.sample_step() for a single LLM call returning SampleStep
task=True for async decorators; progress tracking without blocking
⚠️ Breaking Changes (v2.14.0):
BearerAuthProvider module removed (use JWTVerifier or OAuthProxy)
Context.get_http_request() method removed
fastmcp.Image top-level import removed (use from fastmcp.utilities import Image)
enable_docket, enable_tasks settings removed (always enabled)
run_streamable_http_async(), sse_app(), streamable_http_app(), run_sse_async() methods removed
dependencies parameter removed
output_schema=False support removed
FASTMCP_SERVER_ environment variable prefix removed
Known Compatibility:
⚠️ MAJOR BREAKING CHANGES - FastMCP 3.0 is a complete architectural refactor.
All components are now sourced via Providers:
FileSystemProvider - Discover decorated functions from directories with hot-reload
SkillsProvider - Expose agent skill files as MCP resources
OpenAPIProvider - Auto-generate from OpenAPI specs
ProxyProvider - Proxy to remote MCP servers
from fastmcp import FastMCP
from fastmcp.providers import FileSystemProvider
mcp = FastMCP("server")
mcp.add_provider(FileSystemProvider(path="./tools", reload=True))
Modify components without changing source code:
Namespace, rename, filter by version
ResourcesAsTools - Expose resources as tools
PromptsAsTools - Expose prompts as tools
from fastmcp.transforms import Namespace, VersionFilter
mcp.add_transform(Namespace(prefix="api"))
mcp.add_transform(VersionFilter(min_version="2.0"))
@mcp.tool(version="2.0")
async def fetch_data(query: str) -> dict:
# Clients see highest version by default
# Can request specific version
return {"data": [...]}
@mcp.tool()
async def set_preference(key: str, value: str, ctx: Context) -> dict:
await ctx.set_state(key, value) # Persists across session
return {"saved": True}
@mcp.tool()
async def get_preference(key: str, ctx: Context) -> dict:
value = await ctx.get_state(key, default=None)
return {"value": value}
--reload flag for auto-restart during development
@tool(auth=require_scopes("admin"))
Pin to v2 if not ready:
# requirements.txt
fastmcp<3
For most servers, updating the import is all you need:
# v2.x and v3.0 compatible
from fastmcp import FastMCP
mcp = FastMCP("server")
# ... rest of code works the same
See: Official Migration Guide
Functions LLMs can call. Best practices: Clear names, comprehensive docstrings (LLMs read these!), strong type hints (Pydantic validates), structured returns, error handling.
@mcp.tool()
async def async_tool(url: str) -> dict: # Use async for I/O
async with httpx.AsyncClient() as client:
return (await client.get(url)).json()
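The error-handling advice above can be sketched without any FastMCP imports: return a structured result instead of letting exceptions escape, so the LLM can see and react to failures. The function below is a hypothetical tool body (the @mcp.tool() decorator would wrap a function like this unchanged):

```python
def divide(numerator: float, denominator: float) -> dict:
    """Divide two numbers, returning a structured result."""
    # A structured return lets the calling LLM distinguish success from failure
    if denominator == 0:
        return {"ok": False, "error": "division by zero"}
    return {"ok": True, "result": numerator / denominator}
```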
Expose data to LLMs. URI schemes: data://, file://, resource://, info://, api://, or custom.
@mcp.resource("user://{user_id}/profile") # Template with parameters
async def get_user(user_id: str) -> dict: # CRITICAL: param names must match
return await fetch_user_from_db(user_id)
Pre-configured prompts with parameters.
@mcp.prompt("analyze")
def analyze_prompt(topic: str) -> str:
return f"Analyze {topic} considering: state, challenges, opportunities, recommendations."
Inject Context parameter (with type hint!) for advanced features:
Elicitation (User Input):
from fastmcp import Context
@mcp.tool()
async def confirm_action(action: str, context: Context) -> dict:
confirmed = await context.request_elicitation(prompt=f"Confirm {action}?", response_type=str)
return {"status": "completed" if confirmed.lower() == "yes" else "cancelled"}
Progress Tracking:
@mcp.tool()
async def batch_import(file_path: str, context: Context) -> dict:
data = await read_file(file_path)
for i, item in enumerate(data):
await context.report_progress(i + 1, len(data), f"Importing {i + 1}/{len(data)}")
await import_item(item)
return {"imported": len(data)}
Sampling (LLM calls from tools):
@mcp.tool()
async def enhance_text(text: str, context: Context) -> str:
response = await context.request_sampling(
messages=[{"role": "user", "content": f"Enhance: {text}"}],
temperature=0.7
)
return response["content"]
Long-running operations that report progress without blocking clients. Uses Docket task scheduler (always enabled in v2.14.0+).
Basic Usage:
@mcp.tool(task=True) # Enable background task mode
async def analyze_large_dataset(dataset_id: str, context: Context) -> dict:
"""Analyze large dataset with progress tracking."""
data = await fetch_dataset(dataset_id)
for i, chunk in enumerate(data.chunks):
# Report progress to client
await context.report_progress(
current=i + 1,
total=len(data.chunks),
message=f"Processing chunk {i + 1}/{len(data.chunks)}"
)
await process_chunk(chunk)
return {"status": "complete", "records_processed": len(data)}
Task States: pending → running → completed / failed / cancelled
When to Use:
Known Limitations (v2.14.x):
statusMessage from ctx.report_progress() is not forwarded to clients during background task polling (GitHub Issue #2904)
Workaround: use the low-level mcp SDK (mcp>=1.10.0) instead of FastMCP for now
Important: Tasks execute through the Docket scheduler. Tasks cannot execute through proxies (this will raise an error).
Servers can pass tools to ctx.sample() for agentic workflows where the LLM can call tools during sampling.
Agentic Sampling:
from fastmcp import Context
from fastmcp.sampling import AnthropicSamplingHandler
# Configure sampling handler
mcp = FastMCP("Agent Server")
mcp.add_sampling_handler(AnthropicSamplingHandler(api_key=os.getenv("ANTHROPIC_API_KEY")))
@mcp.tool()
async def research_topic(topic: str, context: Context) -> dict:
"""Research a topic using agentic sampling with tools."""
# Define tools available during sampling
research_tools = [
{
"name": "search_web",
"description": "Search the web for information",
"inputSchema": {"type": "object", "properties": {"query": {"type": "string"}}}
},
{
"name": "fetch_url",
"description": "Fetch content from a URL",
"inputSchema": {"type": "object", "properties": {"url": {"type": "string"}}}
}
]
# Sample with tools - LLM can call these tools during reasoning
result = await context.sample(
messages=[{"role": "user", "content": f"Research: {topic}"}],
tools=research_tools,
max_tokens=4096
)
return {"research": result.content, "tools_used": result.tool_calls}
Single-Step Sampling:
@mcp.tool()
async def get_single_response(prompt: str, context: Context) -> dict:
"""Get a single LLM response without tool loop."""
# sample_step() returns SampleStep for inspection
step = await context.sample_step(
messages=[{"role": "user", "content": prompt}],
temperature=0.7
)
return {
"content": step.content,
"model": step.model,
"stop_reason": step.stop_reason
}
Sampling Handlers:
AnthropicSamplingHandler - For Claude models (v2.14.1+)
OpenAISamplingHandler - For GPT models
Known Limitation: ctx.sample() works when the client connects to a single server, but fails with a "Sampling not supported" error when the client is configured with multiple servers. Tools without sampling work fine. (Community-sourced finding)
Built on py-key-value-aio for OAuth tokens, response caching, persistent state.
Available Backends:
FernetEncryptionWrapper encryption, platform-aware (Mac/Windows default)
Basic Usage:
from key_value.stores import DiskStore, RedisStore
from key_value.encryption import FernetEncryptionWrapper
from cryptography.fernet import Fernet
# Disk (persistent, single instance)
mcp = FastMCP("Server", storage=DiskStore(path="/app/data/storage"))
# Redis (distributed, production)
mcp = FastMCP("Server", storage=RedisStore(
host=os.getenv("REDIS_HOST"), password=os.getenv("REDIS_PASSWORD")
))
# Encrypted storage (recommended)
mcp = FastMCP("Server", storage=FernetEncryptionWrapper(
key_value=DiskStore(path="/app/data"),
fernet=Fernet(os.getenv("STORAGE_ENCRYPTION_KEY"))
))
Platform Defaults: Mac/Windows use Disk, Linux uses Memory. Override with storage parameter.
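The STORAGE_ENCRYPTION_KEY consumed by Fernet(...) above must be a valid Fernet key. One way to generate and sanity-check one (generate once, store in the environment, never commit it):

```python
from cryptography.fernet import Fernet

# Generate once, then export as STORAGE_ENCRYPTION_KEY
key = Fernet.generate_key()

# Round-trip check: Fernet encrypts and authenticates symmetrically
f = Fernet(key)
token = f.encrypt(b"oauth-token")
assert f.decrypt(token) == b"oauth-token"
```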
⚠️ Breaking Change in v2.13.0: Lifespan behavior changed from per-session to per-server-instance.
Initialize/cleanup resources once per server (NOT per session) - critical for DB connections, API clients.
from contextlib import asynccontextmanager
from dataclasses import dataclass
@dataclass
class AppContext:
db: Database
api_client: httpx.AsyncClient
@asynccontextmanager
async def app_lifespan(server: FastMCP):
"""Runs ONCE per server instance."""
db = await Database.connect(os.getenv("DATABASE_URL"))
api_client = httpx.AsyncClient(base_url=os.getenv("API_BASE_URL"), timeout=30.0)
try:
yield AppContext(db=db, api_client=api_client)
finally:
await db.disconnect()
await api_client.aclose()
mcp = FastMCP("Server", lifespan=app_lifespan)
# Access in tools
@mcp.tool()
async def query_db(sql: str, context: Context) -> list:
app_ctx = context.fastmcp_context.lifespan_context
return await app_ctx.db.query(sql)
ASGI Integration (FastAPI/Starlette):
mcp = FastMCP("Server", lifespan=mcp_lifespan)
app = FastAPI(lifespan=mcp.lifespan) # ✅ MUST pass lifespan!
State Management:
context.fastmcp_context.set_state(key, value) # Store
context.fastmcp_context.get_state(key, default=None) # Retrieve
8 Built-in Types: TimingMiddleware, ResponseCachingMiddleware, LoggingMiddleware, RateLimitingMiddleware, ErrorHandlingMiddleware, ToolInjectionMiddleware, PromptToolMiddleware, ResourceToolMiddleware
Execution Order (order matters!):
Request Flow:
→ ErrorHandlingMiddleware (catches errors)
→ TimingMiddleware (starts timer)
→ LoggingMiddleware (logs request)
→ RateLimitingMiddleware (checks rate limit)
→ ResponseCachingMiddleware (checks cache)
→ Tool/Resource Handler
Basic Usage:
from fastmcp.middleware import ErrorHandlingMiddleware, TimingMiddleware, LoggingMiddleware
mcp.add_middleware(ErrorHandlingMiddleware()) # First: catch errors
mcp.add_middleware(TimingMiddleware()) # Second: time requests
mcp.add_middleware(LoggingMiddleware(level="INFO"))
mcp.add_middleware(RateLimitingMiddleware(max_requests=100, window_seconds=60))
mcp.add_middleware(ResponseCachingMiddleware(ttl_seconds=300, storage=RedisStore()))
Custom Middleware:
from fastmcp.middleware import BaseMiddleware
class AccessControlMiddleware(BaseMiddleware):
    def __init__(self, allowed_users: set[str]):
        super().__init__()
        self.allowed_users = allowed_users  # populate the allow-list at construction time
    async def on_call_tool(self, tool_name, arguments, context):
        user = context.fastmcp_context.get_state("user_id")
        if user not in self.allowed_users:
            raise PermissionError(f"User {user!r} not authorized")
        return await self.next(tool_name, arguments, context)
Hook Hierarchy: on_message (all) → on_request/on_notification → on_call_tool/on_read_resource/on_get_prompt → on_list_* (list operations)
Two Strategies:
import_server() - Static snapshot: One-time copy at import, changes don't propagate, fast (no runtime delegation). Use for: Finalized component bundles.
mount() - Dynamic link: Live runtime link, changes immediately visible, runtime delegation (slower). Use for: Modular runtime composition.
Basic Usage:
# Import (static)
main_server.import_server(api_server) # One-time copy
# Mount (dynamic)
main_server.mount(api_server, prefix="api") # Tools: api.fetch_data
main_server.mount(db_server, prefix="db") # Resources: resource://db/path
Tag Filtering:
@api_server.tool(tags=["public"])
def public_api(): pass
main_server.import_server(api_server, include_tags=["public"]) # Only public
main_server.mount(api_server, prefix="api", exclude_tags=["admin"]) # No admin
Resource Prefix Formats:
Path (default since v2.4.0): resource://prefix/path
Protocol (legacy): prefix+resource://path
main_server.mount(subserver, prefix="api", resource_prefix_format="path")
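The two prefix formats can be illustrated without FastMCP; the helper names below are this sketch's own, showing only the string shapes the server produces:

```python
def path_format(prefix: str, uri: str) -> str:
    # Default since v2.4.0: prefix inserted into the path
    # resource://users -> resource://api/users
    scheme, rest = uri.split("://", 1)
    return f"{scheme}://{prefix}/{rest}"

def protocol_format(prefix: str, uri: str) -> str:
    # Legacy: prefix joined onto the scheme
    # resource://users -> api+resource://users
    return f"{prefix}+{uri}"
```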
4 Authentication Patterns:
Pattern 1 (JWTVerifier): Validate external tokens
Pattern 2 (RemoteAuthProvider): OAuth 2.0/OIDC with DCR
Pattern 3 (OAuthProxy): Bridge to providers without DCR (GitHub, Google, Azure, AWS, Discord, Facebook)
Pattern 4 (OAuthProvider): Complete authorization server
Pattern 1: Token Validation
from fastmcp.auth import JWTVerifier
auth = JWTVerifier(issuer="https://auth.example.com", audience="my-server",
public_key=os.getenv("JWT_PUBLIC_KEY"))
mcp = FastMCP("Server", auth=auth)
Pattern 3: OAuth Proxy (Production)
from fastmcp.auth import OAuthProxy
from key_value.stores import RedisStore
from key_value.encryption import FernetEncryptionWrapper
from cryptography.fernet import Fernet
auth = OAuthProxy(
jwt_signing_key=os.environ["JWT_SIGNING_KEY"],
client_storage=FernetEncryptionWrapper(
key_value=RedisStore(host=os.getenv("REDIS_HOST"), password=os.getenv("REDIS_PASSWORD")),
fernet=Fernet(os.environ["STORAGE_ENCRYPTION_KEY"])
),
upstream_authorization_endpoint="https://github.com/login/oauth/authorize",
upstream_token_endpoint="https://github.com/login/oauth/access_token",
upstream_client_id=os.getenv("GITHUB_CLIENT_ID"),
upstream_client_secret=os.getenv("GITHUB_CLIENT_SECRET"),
enable_consent_screen=True # CRITICAL: Prevents confused deputy attacks
)
mcp = FastMCP("GitHub Auth", auth=auth)
OAuth Proxy Features: Token factory pattern (issues own JWTs), consent screens (prevents bypass), PKCE support, RFC 7662 token introspection
Supported Providers: GitHub, Google, Azure, AWS Cognito, Discord, Facebook, WorkOS, AuthKit, Descope, Scalekit, OCI (v2.13.1)
Supabase Provider (v2.14.2+):
from fastmcp.auth import SupabaseProvider
auth = SupabaseProvider(
auth_route="/custom-auth", # Custom auth route (new in v2.14.2)
# ... other config
)
Icons: Add to servers, tools, resources, prompts. Use Icon(url, size), data URIs via Icon.from_file() or Image.to_data_uri() (v2.13.1).
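For reference, a data URI has the shape `data:<mime>;base64,<payload>`. The hypothetical helper below shows the format Icon.from_file()/Image.to_data_uri() are expected to produce; prefer the built-ins over hand-rolling this:

```python
import base64
import mimetypes

def to_data_uri(path: str) -> str:
    # Guess the MIME type from the file extension, then base64-encode the bytes
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as fh:
        return f"data:{mime};base64," + base64.b64encode(fh.read()).decode("ascii")
```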
API Integration (3 Patterns):
Pattern 1: Shared httpx.AsyncClient with base_url/headers/timeout
Pattern 2: FastMCP.from_openapi(spec, client, route_maps) - GET→Resources/Templates, POST/PUT/DELETE→Tools
Pattern 3: FastMCP.from_fastapi(app, httpx_client_kwargs)
Cloud Deployment Critical Requirements:
Export the server at module level as mcp, server, or app
# ✅ CORRECT: Module-level export
mcp = FastMCP("server") # At module level!
# ❌ WRONG: Function-wrapped
def create_server():
return FastMCP("server") # Too late for cloud!
Deployment: https://fastmcp.cloud → Sign in → Create Project → Select repo → Deploy
Client Config (Claude Desktop):
{"mcpServers": {"my-server": {"url": "https://project.fastmcp.app/mcp", "transport": "http"}}}
Error: RuntimeError: No server object found at module level Cause: Server not exported at module level (FastMCP Cloud requirement) Solution: mcp = FastMCP("server") at module level, not inside functions
Error: RuntimeError: no running event loop, TypeError: object coroutine can't be used in 'await' Cause: Mixing sync/async incorrectly Solution: Use async def for tools with await, sync def for non-async code
Error: TypeError: missing 1 required positional argument: 'context' Cause: Missing Context type annotation Solution: async def tool(context: Context) - type hint required!
Error: ValueError: Invalid resource URI: missing scheme Cause: Resource URI missing scheme prefix Solution: Use @mcp.resource("data://config") not @mcp.resource("config")
Error: TypeError: get_user() missing 1 required positional argument Cause: Function parameter names don't match URI template Solution: @mcp.resource("user://{user_id}/profile") → def get_user(user_id: str) - names must match exactly
Error: ValidationError: value is not a valid integer Cause: Type hints don't match provided data Solution: Use Pydantic models: class Params(BaseModel): query: str = Field(min_length=1)
Error: ConnectionError: Server using different transport Cause: Client and server using incompatible transports Solution: Match transports - stdio: mcp.run() + {"command": "python", "args": ["server.py"]}, HTTP: mcp.run(transport="http", port=8000) + {"url": "http://localhost:8000/mcp", "transport": "http"}
HTTP Timeout Issue (Fixed in v2.14.3):
Error: ModuleNotFoundError: No module named 'my_package' Cause: Package not properly installed Solution: pip install -e ., use absolute imports, or export PYTHONPATH="/path/to/project"
Error: DeprecationWarning: 'mcp.settings' is deprecated Cause: Using old FastMCP v1 API Solution: Use os.getenv("API_KEY") instead of mcp.settings.get("API_KEY")
Error: OSError: [Errno 48] Address already in use Cause: Port 8000 already occupied Solution: Use different port --port 8001 or kill process lsof -ti:8000 | xargs kill -9
Error: TypeError: Object of type 'ndarray' is not JSON serializable Cause: Unsupported type hints (NumPy arrays, custom classes) Solution: Return JSON-compatible types: list[float] or convert: {"values": np_array.tolist()}
Custom Classes Not Supported (Community-sourced): FastMCP supports all Pydantic-compatible types, but custom classes must be converted to dictionaries or Pydantic models for tool returns:
# ❌ NOT SUPPORTED
class MyCustomClass:
def __init__(self, value: str):
self.value = value
@mcp.tool()
async def get_custom() -> MyCustomClass:
return MyCustomClass("test") # Serialization error
# ✅ SUPPORTED - Use dict or Pydantic
@mcp.tool()
async def get_custom() -> dict[str, str]:
obj = MyCustomClass("test")
return {"value": obj.value}
# OR use Pydantic BaseModel
from pydantic import BaseModel
class MyModel(BaseModel):
value: str
@mcp.tool()
async def get_model() -> MyModel:
return MyModel(value="test") # Works!
OutputSchema $ref Resolution (Fixed in v2.14.2): $ref in outputSchema wasn't being dereferenced (GitHub Issue #2720)
Error: TypeError: Object of type 'datetime' is not JSON serializable Cause: Returning non-JSON-serializable objects Solution: Convert: datetime.now().isoformat(), bytes: .decode('utf-8')
Error: ImportError: cannot import name 'X' from partially initialized module Cause: Circular dependency (common in cloud deployment) Solution: Use direct imports in __init__.py: from .api_client import APIClient or lazy imports in functions
Error: DeprecationWarning: datetime.utcnow() is deprecated Cause: Using deprecated Python 3.12+ methods Solution: Use datetime.now(timezone.utc) instead of datetime.utcnow()
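The two datetime fixes above (JSON-safe conversion and the `utcnow()` replacement) in one small sketch:

```python
from datetime import datetime, timezone

# Replacement for deprecated datetime.utcnow(): timezone-aware UTC
now = datetime.now(timezone.utc)

# JSON-safe conversions for tool return values
payload = {
    "timestamp": now.isoformat(),          # datetime -> ISO 8601 string
    "note": b"raw bytes".decode("utf-8"),  # bytes -> str
}
print(payload["timestamp"].endswith("+00:00"))  # True (aware UTC offset)
```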
Error: RuntimeError: Event loop is closed Cause: Creating async resources at module import time Solution: Use lazy initialization - create connection class with async connect() method, call when needed in tools
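The lazy-initialization pattern above, sketched with a hypothetical `Database` class (no real driver; the async handshake is simulated):

```python
import asyncio

class Database:
    """Hypothetical connection holder: nothing async runs at import time."""
    def __init__(self, dsn: str):
        self.dsn = dsn
        self._conn = None  # created lazily, inside a running event loop

    async def connect(self):
        if self._conn is None:
            await asyncio.sleep(0)  # stand-in for a real async handshake
            self._conn = f"connection to {self.dsn}"
        return self._conn

db = Database("postgres://example")  # safe at module import: no event loop needed

async def query_tool() -> str:
    conn = await db.connect()  # first call connects, later calls reuse
    return conn

print(asyncio.run(query_tool()))  # connection to postgres://example
```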
Error: RuntimeError: OAuth tokens lost on restart, ValueError: Cache not persisting Cause: Using default memory storage in production without persistence Solution: Use encrypted DiskStore (single instance) or RedisStore (multi-instance) with FernetEncryptionWrapper
Error: RuntimeError: Database connection never initialized, Warning: MCP lifespan hooks not running Cause: FastMCP with FastAPI/Starlette without passing lifespan (v2.13.0 requirement) Solution: app = FastAPI(lifespan=mcp.lifespan) - MUST pass lifespan!
Error: RuntimeError: Rate limit not checked before caching Cause: Incorrect middleware ordering (order matters!) Solution: ErrorHandling → Timing → Logging → RateLimiting → ResponseCaching (this order)
Error: RecursionError: maximum recursion depth exceeded Cause: Middleware not calling self.next() or calling incorrectly Solution: Always call result = await self.next(tool_name, arguments, context) in middleware hooks
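The chaining rule above (always `await self.next(...)` exactly once) can be sketched with a toy chain; the class names and hook signature here are illustrative, not FastMCP's actual API:

```python
import asyncio

class Middleware:
    """Toy middleware: 'next' is the next hook (or final handler) in the chain."""
    def __init__(self, next_call):
        self.next = next_call

    async def on_call_tool(self, tool_name, arguments, context):
        # Forgetting this call stalls the chain; calling yourself recurses forever
        result = await self.next(tool_name, arguments, context)
        return result

class LoggingMiddleware(Middleware):
    async def on_call_tool(self, tool_name, arguments, context):
        context.setdefault("log", []).append(f"call {tool_name}")
        return await self.next(tool_name, arguments, context)

async def handler(tool_name, arguments, context):
    return {"tool": tool_name, **arguments}

# Build the chain outermost-first: logging -> base -> handler
chain = LoggingMiddleware(Middleware(handler).on_call_tool).on_call_tool
ctx = {}
result = asyncio.run(chain("hello", {"name": "world"}, ctx))
print(result)      # {'tool': 'hello', 'name': 'world'}
print(ctx["log"])  # ['call hello']
```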
Error: RuntimeError: Subserver changes not reflected, ValueError: Unexpected tool namespacing Cause: Using import_server() when mount() was needed (or vice versa) Solution: import_server() for static bundles (one-time copy), mount() for dynamic composition (live link)
Error: ValueError: Resource not found: resource://api/users Cause: Using wrong resource prefix format Solution: Path format (default v2.4.0+): resource://prefix/path, Protocol (legacy): prefix+resource://path - set with resource_prefix_format="path"
Error: SecurityWarning: Authorization bypass possible Cause: OAuth Proxy without consent screen (security vulnerability) Solution: Always set enable_consent_screen=True - prevents confused deputy attacks (CRITICAL)
Error: ValueError: JWT signing key required for OAuth Proxy Cause: OAuth Proxy missing jwt_signing_key Solution: Generate: secrets.token_urlsafe(32), store in FASTMCP_JWT_SIGNING_KEY env var, pass to OAuthProxy(jwt_signing_key=...)
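Generating and wiring the signing key as described above (the env-var handoff is standard library; only the final `OAuthProxy(jwt_signing_key=...)` call, not shown, is FastMCP-specific):

```python
import os
import secrets

# Generate once; store the value in the FASTMCP_JWT_SIGNING_KEY env var
key = secrets.token_urlsafe(32)  # 32 random bytes, URL-safe base64 (43 chars)

# Read it back at server startup and pass to OAuthProxy(jwt_signing_key=...)
os.environ.setdefault("FASTMCP_JWT_SIGNING_KEY", key)
signing_key = os.environ["FASTMCP_JWT_SIGNING_KEY"]
```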
Error: ValueError: Invalid data URI format Cause: Incorrectly formatted data URI for icons Solution: Use Icon.from_file("/path/icon.png", size="medium") or Image.to_data_uri() (v2.13.1) - don't manually format
Error: Warning: Lifespan runs per-server, not per-session Cause: Expecting v2.12 behavior (per-session) in v2.13.0+ (per-server) Solution: v2.13.0+ lifespans run ONCE per server, not per session - use middleware for per-session logic
Error: ImportError: cannot import name 'BearerAuthProvider' from 'fastmcp.auth' Cause: BearerAuthProvider module removed in v2.14.0 Solution: Use JWTVerifier for token validation or OAuthProxy for full OAuth flows:
# Before (v2.13.x)
from fastmcp.auth import BearerAuthProvider
# After (v2.14.0+)
from fastmcp.auth import JWTVerifier
auth = JWTVerifier(issuer="...", audience="...", public_key="...")
Error: AttributeError: 'Context' object has no attribute 'get_http_request' Cause: Context.get_http_request() method removed in v2.14.0 Solution: Access request info through middleware or use InitializeResult exposed to middleware
Error: ImportError: cannot import name 'Image' from 'fastmcp' Cause: fastmcp.Image top-level import removed in v2.14.0 Solution: Use new import path:
# Before (v2.13.x)
from fastmcp import Image
# After (v2.14.0+)
from fastmcp.utilities import Image
Error: Client can't connect to /mcp endpoint, gets 404 Source: GitHub Issue #2961 Cause: Mounting FastMCP at /mcp creates endpoint at /mcp/mcp due to path prefix duplication Solution: Mount at root / or adjust client config
# ❌ WRONG - Creates /mcp/mcp endpoint
from fastapi import FastAPI
from fastmcp import FastMCP
mcp = FastMCP("server")
app = FastAPI(lifespan=mcp.lifespan)
app.mount("/mcp", mcp) # Endpoint becomes /mcp/mcp
# ✅ CORRECT - Mount at root
app.mount("/", mcp) # Endpoint is /mcp
# ✅ OR adjust client config
# In claude_desktop_config.json:
{"url": "http://localhost:8000/mcp/mcp", "transport": "http"}
Critical: Must also pass lifespan=mcp.lifespan to FastAPI (see Error #17).
Error: RuntimeError: No active context found Source: GitHub Issue #2877 Cause: ContextVar propagation issue when FastMCP mounted in FastAPI/Starlette with background tasks (task=True) Solution: Upgrade to fastmcp>=2.14.3
# In v2.14.2 and earlier - FAILS
from fastapi import FastAPI
from fastmcp import FastMCP, Context
mcp = FastMCP("server")
app = FastAPI(lifespan=mcp.lifespan)
@mcp.tool(task=True)
async def sample(name: str, ctx: Context) -> dict:
# RuntimeError: No active context found
await ctx.report_progress(1, 1, "Processing")
return {"status": "OK"}
app.mount("/", mcp)
# ✅ FIXED in v2.14.3
# pip install fastmcp>=2.14.3
Note: Related to Error #17 (Lifespan Not Passed to ASGI App).
4 Production Patterns:
utils.py with Config class, format_success/error helpers
httpx.AsyncClient with get_client() class method
retry_with_backoff(func, max_retries=3, initial_delay=1.0, exponential_base=2.0)
TimeBasedCache(ttl=300) with .get() and .set() methods
Testing:
pytest + create_test_client(test_server) + await client.call_tool()
Client("server.py") + list_tools() + call_tool() + list_resources()
CLI Commands:
fastmcp dev server.py # Run with inspector
fastmcp install server.py # Install to Claude Desktop
FASTMCP_LOG_LEVEL=DEBUG fastmcp dev # Debug logging
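The retry_with_backoff helper named in the production patterns above might look like this (signature taken from the list; the body is an assumed implementation):

```python
import asyncio

async def retry_with_backoff(func, max_retries=3, initial_delay=1.0, exponential_base=2.0):
    """Retry an async callable, multiplying the delay after each failure."""
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return await func()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            await asyncio.sleep(delay)
            delay *= exponential_base

# Demo: a flaky call that succeeds on the third attempt
attempts = {"n": 0}
async def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(asyncio.run(retry_with_backoff(flaky, initial_delay=0.01)))  # ok
```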
Best Practices: Factory pattern with module-level export, environment config with validation, comprehensive docstrings (LLMs read these!), health check resources
Project Structure:
server.py, requirements.txt, .env, README.md
src/ (server.py, utils.py, tools/, resources/, prompts/), tests/, pyproject.toml
Official: https://github.com/jlowin/fastmcp, https://fastmcp.cloud, https://modelcontextprotocol.io, Context7: /jlowin/fastmcp
Related Skills: openai-api, claude-api, cloudflare-worker-base, typescript-mcp
Package Versions: fastmcp>=2.14.2 (PyPI), Python>=3.10 (3.13 supported in v2.14.1+), httpx, pydantic, py-key-value-aio, cryptography
Last Updated: 2026-01-21
17 Key Takeaways:
import_server() (static) vs mount() (dynamic)
Inspector-driven development (fastmcp dev)
Background tasks (task=True)
Agentic sampling (ctx.sample(tools=[...]))
Production Readiness: Encrypted storage, 4 auth patterns, 8 middleware types, modular composition, OAuth security (consent screens, PKCE, RFC 7662), response caching, connection pooling, timing middleware, background tasks, agentic sampling, FastAPI/Starlette mounting, v3.0 provider architecture
Prevents 30+ errors. 90-95% token savings.
Weekly Installs: 405
GitHub Stars: 652
First Seen: Jan 20, 2026
Security Audits: Gen Agent Trust Hub: Pass | Socket: Pass | Snyk: Warn
Installed on: claude-code (333), opencode (284), gemini-cli (275), codex (252), cursor (242), antigravity (233)