mcp-builder by 4444j99/a-i--skills
npx skills add https://github.com/4444j99/a-i--skills --skill mcp-builder
To create high-quality MCP (Model Context Protocol) servers that enable LLMs to effectively interact with external services, use this skill. An MCP server provides tools that allow LLMs to access external services and APIs. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks using the tools provided.
Creating a high-quality MCP server involves four main phases:
Before diving into implementation, understand how to design tools for AI agents by reviewing these principles:
Build for Workflows, Not Just API Endpoints:
- Example: a schedule_event tool that both checks availability and creates the event

Optimize for Limited Context:
Design Actionable Error Messages:
Follow Natural Task Subdivisions:
Use Evaluation-Driven Development:
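The "Design Actionable Error Messages" principle above can be sketched as follows. This is a minimal, hypothetical example (the tool and its date format are assumptions, not part of any real SDK): when a tool rejects input, it should say what was expected and what the model should do next.

```python
import re

def parse_event_date(raw: str) -> str:
    """Validate a YYYY-MM-DD date string for a (hypothetical) calendar tool."""
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", raw):
        # Actionable: states the expected format, gives an example,
        # and tells the model how to recover - not just "invalid input".
        raise ValueError(
            f"Invalid date {raw!r}: expected YYYY-MM-DD (e.g. 2024-05-01). "
            "If the user gave a relative date like 'tomorrow', resolve it "
            "to a calendar date first, then retry."
        )
    return raw
```

The error text is written for the LLM that will read it, so it doubles as recovery instructions.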
Fetch the latest MCP protocol documentation:
Use WebFetch to load: https://modelcontextprotocol.io/llms-full.txt
This comprehensive document contains the complete MCP specification and guidelines.
Load and read the following reference files:
For Python implementations, also load:
https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md

For Node/TypeScript implementations, also load:
https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md

To integrate a service, read through ALL available API documentation:
To gather comprehensive information, use web search and the WebFetch tool as needed.
Based on your research, create a detailed plan that includes:
Tool Selection:
Shared Utilities and Helpers:
Input/Output Design:
Error Handling Strategy:
Now that you have a comprehensive plan, begin implementation following language-specific best practices.
For Python:
- A single .py file, or organized into modules if complex (see 🐍 Python Guide)

For Node/TypeScript:
- package.json and tsconfig.json

To begin implementation, create shared utilities before implementing tools:
For each tool in the plan:
Define Input Schema:
Write Comprehensive Docstrings/Descriptions:
Implement Tool Logic:
Add Tool Annotations:
- readOnlyHint: true (for read-only operations)
- destructiveHint: false (for non-destructive operations)
- idempotentHint: true (if repeated calls have same effect)
- openWorldHint: true (if interacting with external systems)

At this point, load the appropriate language guide:
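The per-tool steps above (input schema, docstring, logic, annotations) might look like this in plain Python. This is a stdlib-only sketch with stub data; in a real server you would register the function with the MCP SDK (for example FastMCP's @mcp.tool decorator) rather than define the annotations as a bare dict:

```python
from typing import TypedDict

class SearchEventsInput(TypedDict):
    """Input schema: a query string plus a result limit."""
    query: str
    limit: int

def search_events(args: SearchEventsInput) -> list[str]:
    """Search calendar events whose title contains `query`.

    Returns at most `limit` matching titles. Call this before scheduling
    to avoid double-booking. (The event data below is a stand-in.)
    """
    events = ["team sync", "design review", "quarterly review"]  # stub data
    return [e for e in events if args["query"] in e][: args["limit"]]

# The annotations from the list above, expressed as plain data for illustration
SEARCH_EVENTS_ANNOTATIONS = {
    "readOnlyHint": True,      # only reads calendar data
    "destructiveHint": False,  # never deletes or overwrites anything
    "idempotentHint": True,    # repeating the call gives the same result
    "openWorldHint": True,     # a real version would query an external calendar
}
```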
For Python: Load the 🐍 Python Implementation Guide and ensure the following:
- Pydantic v2 models with model_config

For Node/TypeScript: Load the ⚡ TypeScript Implementation Guide and ensure the following:
- Tools registered properly with server.registerTool
- Zod schemas with .strict()
- No any types - use proper types
- Build completes without errors (npm run build)

After initial implementation:
To ensure quality, review the code for:
Important: MCP servers are long-running processes that wait for requests over stdio/stdin or sse/http. Running them directly in your main process (e.g., python server.py or node dist/index.js) will cause your process to hang indefinitely.
Safe ways to test the server:
- timeout 5s python server.py

For Python:
- python -m py_compile your_server.py

For Node/TypeScript:
- npm run build and ensure it completes without errors

To verify implementation quality, load the appropriate checklist from the language-specific guide:
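The hang-avoidance advice above can be demonstrated with the standard library alone: start a blocking stand-in process and bound it with a timeout. Here demo_server.py is a throwaway file created by the snippet, standing in for a real MCP server that blocks waiting on stdio:

```python
import subprocess
import sys

# A stand-in "server" that blocks, the way an MCP server waits for requests.
with open("demo_server.py", "w") as f:
    f.write("import time\ntime.sleep(60)\n")

# 1. Syntax-check without starting the long-running process.
subprocess.run([sys.executable, "-m", "py_compile", "demo_server.py"], check=True)

# 2. Run with a hard timeout so it cannot hang this process
#    (the shell equivalent is `timeout 5s python server.py`).
timed_out = False
try:
    subprocess.run([sys.executable, "demo_server.py"], timeout=2)
except subprocess.TimeoutExpired:
    timed_out = True  # the server was up and still waiting: expected behavior

print("server blocked as expected:", timed_out)
```

A timeout that fires is the success signal here: it means the server started and stayed alive waiting for requests.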
After implementing your MCP server, create comprehensive evaluations to test its effectiveness.
Load the ✅ Evaluation Guide for complete evaluation guidelines.
Evaluations test whether LLMs can effectively use your MCP server to answer realistic, complex questions.
To create effective evaluations, follow the process outlined in the evaluation guide:
Each question must meet the criteria described in the evaluation guide.
Create an XML file with this structure:
<evaluation>
<qa_pair>
<question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
<answer>3</answer>
</qa_pair>
<!-- More qa_pairs... -->
</evaluation>
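A quick structural check for such a file can be done with the standard library. The XML below is inlined from the example above; in practice you would parse your own evaluation file instead:

```python
import xml.etree.ElementTree as ET

EVAL_XML = """<evaluation>
  <qa_pair>
    <question>For the model named after a spotted wild cat, what number X was being determined?</question>
    <answer>3</answer>
  </qa_pair>
</evaluation>"""

root = ET.fromstring(EVAL_XML)
assert root.tag == "evaluation"
pairs = root.findall("qa_pair")
assert pairs, "need at least one qa_pair"
for pair in pairs:
    # findtext returns None if the child element is missing
    assert pair.findtext("question"), "qa_pair missing <question>"
    assert pair.findtext("answer"), "qa_pair missing <answer>"
print(f"validated {len(pairs)} qa_pair(s)")
```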
Load these resources as needed during development:
- https://modelcontextprotocol.io/llms-full.txt - Complete MCP specification
- https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md
- https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md

🐍 Python Implementation Guide - Complete Python/FastMCP guide with:
- Tool registration with @mcp.tool

⚡ TypeScript Implementation Guide - Complete TypeScript guide with:
- Tool registration with server.registerTool

Weekly Installs: 1
Repository: https://github.com/4444j99/a-i--skills
GitHub Stars: 2
First Seen: 1 day ago
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Fail
Installed on: zencoder (1), amp (1), cline (1), openclaw (1), opencode (1), cursor (1)