mcp-builder by composiohq/awesome-claude-skills
npx skills add https://github.com/composiohq/awesome-claude-skills --skill mcp-builder
To create high-quality MCP (Model Context Protocol) servers that enable LLMs to effectively interact with external services, use this skill. An MCP server provides tools that allow LLMs to access external services and APIs. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks using the tools provided.
Creating a high-quality MCP server involves four main phases:
Before diving into implementation, understand how to design tools for AI agents by reviewing these principles:
Build for Workflows, Not Just API Endpoints:
(e.g., a schedule_event that both checks availability and creates the event)
Optimize for Limited Context:
Design Actionable Error Messages:
Follow Natural Task Subdivisions:
Use Evaluation-Driven Development:
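Two of the principles above, workflow consolidation and actionable errors, can be sketched in a few lines. This is a minimal illustration only: the schedule_event tool, the in-memory BUSY_SLOTS store, and the error wording are all hypothetical, standing in for a real calendar backend.

```python
# Illustrative only: one consolidated workflow tool instead of separate
# check_availability + create_event endpoints. BUSY_SLOTS is a stand-in
# for a real calendar service.
BUSY_SLOTS = [("2025-06-01T10:00", "2025-06-01T11:00")]

def schedule_event(start: str, end: str, title: str) -> dict:
    """Check availability AND create the event in one call, so the model
    does not have to orchestrate two round-trips itself."""
    for busy_start, busy_end in BUSY_SLOTS:
        # ISO-8601 strings of equal precision compare correctly as text.
        if start < busy_end and busy_start < end:
            # Actionable error: say what conflicted and what to do next,
            # instead of returning a bare failure code.
            return {"error": (
                f"{start}..{end} overlaps an existing event "
                f"({busy_start}..{busy_end}). Choose a time outside that "
                "range and call schedule_event again."
            )}
    BUSY_SLOTS.append((start, end))
    return {"created": True, "title": title, "start": start, "end": end}
```

A conflicting request comes back with a recovery hint the model can act on; a free slot is booked in the same call that checked it.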
Fetch the latest MCP protocol documentation:
Use WebFetch to load: https://modelcontextprotocol.io/llms-full.txt
This comprehensive document contains the complete MCP specification and guidelines.
Load and read the following reference files:
For Python implementations, also load:
https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md
For Node/TypeScript implementations, also load:
https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md
To integrate a service, read through ALL available API documentation:
To gather comprehensive information, use web search and the WebFetch tool as needed.
Based on your research, create a detailed plan that includes:
Tool Selection:
Shared Utilities and Helpers:
Input/Output Design:
Error Handling Strategy:
Now that you have a comprehensive plan, begin implementation following language-specific best practices.
For Python:
A single .py file, or organized into modules if complex (see 🐍 Python Guide)
For Node/TypeScript:
Set up package.json and tsconfig.json
To begin implementation, create shared utilities before implementing tools:
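As one illustration of the shared-utilities step, here is a response formatter and an error formatter that every tool could reuse. The helper names and the 4000-character limit are assumptions for this sketch, not part of the skill:

```python
import json

# Illustrative cap: keep any single tool result small so one call cannot
# flood the model's limited context window.
MAX_CHARS = 4000

def format_response(data: object, max_chars: int = MAX_CHARS) -> str:
    """Shared helper: serialize an API response and truncate it,
    noting how much was cut so the model knows the result is partial."""
    text = json.dumps(data, indent=2, default=str)
    if len(text) > max_chars:
        text = text[:max_chars] + f"\n... [truncated {len(text) - max_chars} chars]"
    return text

def format_error(exc: Exception, hint: str) -> str:
    """Shared helper: turn an exception into an actionable message the
    model can recover from, instead of a bare traceback."""
    return f"{type(exc).__name__}: {exc}. {hint}"
```

Every tool then returns through these helpers, so truncation and error phrasing stay consistent across the server.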
For each tool in the plan:
Define Input Schema:
Write Comprehensive Docstrings/Descriptions:
Implement Tool Logic:
Add Tool Annotations:
readOnlyHint: true (for read-only operations)
destructiveHint: false (for non-destructive operations)
idempotentHint: true (if repeated calls have the same effect)
openWorldHint: true (if interacting with external systems)
At this point, load the appropriate language guide:
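The annotation hints sit alongside the input schema in a tool definition. Sketched below as the kind of JSON an MCP tools/list response carries (field names per the MCP spec; the search_issues tool and its schema are made up for illustration):

```python
# Shape of one tool entry in a tools/list response. The hint fields are
# the standard MCP ToolAnnotations; everything else is illustrative.
search_tool = {
    "name": "search_issues",  # hypothetical read-only search tool
    "description": "Search issues by keyword. Returns at most 20 matches.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Keywords to match"},
            "limit": {"type": "integer", "minimum": 1, "maximum": 20, "default": 10},
        },
        "required": ["query"],
        "additionalProperties": False,  # reject unknown arguments
    },
    "annotations": {
        "readOnlyHint": True,      # read-only operation
        "destructiveHint": False,  # non-destructive
        "idempotentHint": True,    # repeated calls have the same effect
        "openWorldHint": True,     # talks to an external system
    },
}
```

The SDKs generate this JSON for you from your registered tools; seeing the target shape helps when reviewing what the model will actually receive.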
For Python: Load the 🐍 Python Implementation Guide and ensure the following:
Pydantic v2 models with model_config
For Node/TypeScript: Load the ⚡ TypeScript Implementation Guide and ensure the following:
Tools registered properly with server.registerTool
Zod schemas with .strict()
No any types - use proper types
The build completes without errors (npm run build)
After initial implementation:
To ensure quality, review the code for:
Important: MCP servers are long-running processes that wait for requests over stdio or SSE/HTTP. Running them directly in your main process (e.g., python server.py or node dist/index.js) will cause your process to hang indefinitely.
Safe ways to test the server:
timeout 5s python server.py
For Python:
python -m py_compile your_server.py
For Node/TypeScript:
npm run build and ensure it completes without errors
To verify implementation quality, load the appropriate checklist from the language-specific guide:
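The timeout-based smoke test above can also be scripted, for example with Python's subprocess module. The helper name and the interpretation of the timeout are assumptions of this sketch:

```python
import subprocess
import sys

def smoke_test_server(cmd: list[str], seconds: float = 5.0) -> bool:
    """Launch the server command briefly. A stdio MCP server should block
    waiting for requests, so hitting the timeout means startup succeeded;
    an early exit means it crashed or quit instead of serving."""
    try:
        subprocess.run(cmd, capture_output=True, timeout=seconds)
    except subprocess.TimeoutExpired:
        return True   # still running after `seconds`: started cleanly
    return False      # exited before the timeout: startup problem

# Usage ("server.py" is a placeholder for your entry point):
# ok = smoke_test_server([sys.executable, "server.py"])
```

This inverts the usual intuition: for a long-running server, timing out is the success case.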
After implementing your MCP server, create comprehensive evaluations to test its effectiveness.
Load the ✅ Evaluation Guide for complete evaluation guidelines.
Evaluations test whether LLMs can effectively use your MCP server to answer realistic, complex questions.
To create effective evaluations, follow the process outlined in the evaluation guide:
Each question must be:
Create an XML file with this structure:
<evaluation>
<qa_pair>
<question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
<answer>3</answer>
</qa_pair>
<!-- More qa_pairs... -->
</evaluation>
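A quick structural check of such an evaluation file can be done with the standard library before running it against a model. The check_evaluation helper below is illustrative, not part of the skill:

```python
import xml.etree.ElementTree as ET

def check_evaluation(xml_text: str) -> list[tuple[str, str]]:
    """Parse an evaluation file and return its (question, answer) pairs,
    failing loudly if the expected structure is missing."""
    root = ET.fromstring(xml_text)
    if root.tag != "evaluation":
        raise ValueError("root element must be <evaluation>")
    pairs = []
    for qa in root.findall("qa_pair"):
        question = qa.findtext("question")
        answer = qa.findtext("answer")
        if not question or not answer:
            raise ValueError("each <qa_pair> needs <question> and <answer>")
        pairs.append((question.strip(), answer.strip()))
    if not pairs:
        raise ValueError("evaluation contains no qa_pair elements")
    return pairs
```

Running this over every evaluation file catches malformed XML and empty answers before any model time is spent.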
Load these resources as needed during development:
https://modelcontextprotocol.io/llms-full.txt - Complete MCP specification
https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md - Python SDK reference
https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md - TypeScript SDK reference
🐍 Python Implementation Guide - Complete Python/FastMCP guide with:
Tool registration with @mcp.tool
⚡ TypeScript Implementation Guide - Complete TypeScript guide with:
Tool registration with server.registerTool
Weekly Installs: 708
Repository: composiohq/awesome-claude-skills
GitHub Stars: 42.3K
First Seen: Jan 20, 2026
Security Audits: Gen Agent Trust Hub: Warn, Socket: Pass, Snyk: Warn
Installed on: opencode (608), gemini-cli (562), codex (544), cursor (528), claude-code (521), github-copilot (485)