mcp:build-mcp by neolabhq/context-engineering-kit
npx skills add https://github.com/neolabhq/context-engineering-kit --skill mcp:build-mcp
To create high-quality MCP (Model Context Protocol) servers that enable LLMs to effectively interact with external services, use this skill. An MCP server provides tools that allow LLMs to access external services and APIs. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks using the tools provided.
Creating a high-quality MCP server involves four main phases: research, planning, implementation (with review and testing), and evaluation.
Before diving into implementation, understand how to design tools for AI agents by reviewing these principles:
- Build for Workflows, Not Just API Endpoints (e.g., a schedule_event that both checks availability and creates the event)
- Optimize for Limited Context
- Design Actionable Error Messages
- Follow Natural Task Subdivisions
- Use Evaluation-Driven Development
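The first and third principles can be illustrated with a framework-free sketch. All names here (schedule_event, check_availability, create_event, the BOOKED store) are hypothetical stand-ins, not part of any real SDK: the point is that one agent-facing tool wraps the whole workflow and fails with an actionable message rather than a bare error code.

```python
# Hypothetical sketch: a workflow-level tool that wraps two imaginary
# endpoint-level calls behind one operation, and returns actionable
# error text that tells the model what to do next.

BOOKED = {("alice", "2024-06-01T10:00")}  # stand-in for a calendar API

def check_availability(user: str, start: str) -> bool:
    """Stand-in for a GET /availability endpoint."""
    return (user, start) not in BOOKED

def create_event(user: str, start: str, title: str) -> dict:
    """Stand-in for a POST /events endpoint."""
    BOOKED.add((user, start))
    return {"user": user, "start": start, "title": title}

def schedule_event(user: str, start: str, title: str) -> dict:
    """One agent-facing tool that performs the whole workflow:
    verify the slot is free, then book it."""
    if not check_availability(user, start):
        # Actionable error: say what failed and what to try next.
        return {
            "error": (
                f"{start} is already booked for {user}. "
                "Call schedule_event again with a different start time."
            )
        }
    return create_event(user, start, title)

print(schedule_event("alice", "2024-06-01T10:00", "standup"))  # error path
print(schedule_event("alice", "2024-06-01T11:00", "standup"))  # success path
```

The alternative, exposing check_availability and create_event as two separate tools, forces the model to spend context orchestrating calls that the server could have combined.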
Fetch the latest MCP protocol documentation:
Use WebFetch to load: https://modelcontextprotocol.io/llms-full.txt
This comprehensive document contains the complete MCP specification and guidelines.
Load and read the following reference files:
- For Python implementations, also load: https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md
- For Node/TypeScript implementations, also load: https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md

To integrate a service, read through ALL available API documentation:
To gather comprehensive information, use web search and the WebFetch tool as needed.
Based on your research, create a detailed plan that includes:
- Tool Selection
- Shared Utilities and Helpers
- Input/Output Design
- Error Handling Strategy
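The last two items can be planned together: one shared helper can centralize the error-handling strategy so every tool emits the same actionable output shape. The sketch below is illustrative; the status codes, hints, and field names are assumptions, not from any specific API.

```python
# Hypothetical shared utility: map raw failure codes to messages that
# tell the model what to do next, in one uniform payload shape that
# every tool in the server reuses.

NEXT_STEPS = {
    401: "The API key is missing or invalid; ask the user to reconfigure credentials.",
    404: "The resource does not exist; list available resources before retrying.",
    429: "Rate limit hit; wait briefly and retry with the same arguments.",
}

def tool_error(status: int, detail: str) -> dict:
    """Build a uniform, actionable error payload for any tool."""
    hint = NEXT_STEPS.get(
        status, "Retry once; if it fails again, report the detail to the user."
    )
    return {"is_error": True, "message": f"{detail} (HTTP {status}). {hint}"}

print(tool_error(429, "GitHub search failed"))
```

Deciding this shape during planning keeps individual tool implementations short and keeps error text consistent across the whole server.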
Now that you have a comprehensive plan, begin implementation following language-specific best practices.
- For Python: a single .py file, or organized into modules if complex (see 🐍 Python Guide)
- For Node/TypeScript: a project with package.json and tsconfig.json

To begin implementation, create shared utilities before implementing tools:
For each tool in the plan:
- Define Input Schema
- Write Comprehensive Docstrings/Descriptions
- Implement Tool Logic
- Add Tool Annotations:
  - readOnlyHint: true (for read-only operations)
  - destructiveHint: false (for non-destructive operations)
  - idempotentHint: true (if repeated calls have the same effect)
  - openWorldHint: true (if interacting with external systems)

At this point, load the appropriate language guide:
For Python: Load the 🐍 Python Implementation Guide and ensure the following:

- Pydantic v2 models with model_config

For Node/TypeScript: Load the ⚡ TypeScript Implementation Guide and ensure the following:

- Tools registered with server.registerTool
- Zod schemas with .strict()
- No any types - use proper types
- A clean build (npm run build)

After initial implementation:
To ensure quality, review the code for:
Important: MCP servers are long-running processes that wait for requests over stdio (stdin/stdout) or SSE/HTTP. Running them directly in your main process (e.g., python server.py or node dist/index.js) will cause your process to hang indefinitely.
Safe ways to test the server:
- Run with a timeout: timeout 5s python server.py
- For Python: python -m py_compile your_server.py
- For Node/TypeScript: npm run build and ensure it completes without errors

To verify implementation quality, load the appropriate checklist from the language-specific guide:
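The timeout approach above can be wrapped in a small harness, sketched here with only the standard library. The commands passed in are placeholders for your actual server entry point; the key idea is that a server which is still alive when the timeout fires has passed the smoke test, because an MCP server is supposed to keep waiting for requests.

```python
# Minimal smoke-test harness: run a (correctly) long-running server
# under a timeout so it cannot hang the calling session.
import subprocess
import sys

def smoke_test(cmd: list[str], timeout: float = 5.0) -> tuple[bool, str]:
    """Return (ok, stderr). Surviving the full timeout counts as
    success; exiting early is only ok with a zero return code."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return True, ""  # still alive after `timeout` seconds: startup is fine
    return proc.returncode == 0, proc.stderr  # exited early: check why

# Example with a stand-in "server" that blocks like a real one would:
ok, err = smoke_test([sys.executable, "-c", "import time; time.sleep(60)"], timeout=1.0)
print(ok)  # prints True
```

subprocess.run kills the child process when the timeout expires, so the harness itself always returns promptly.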
After implementing your MCP server, create comprehensive evaluations to test its effectiveness.
Load the ✅ Evaluation Guide for complete evaluation guidelines.
Evaluations test whether LLMs can effectively use your MCP server to answer realistic, complex questions.
To create effective evaluations, follow the process outlined in the evaluation guide:
Each question must be:
Create an XML file with this structure:
<evaluation>
<qa_pair>
<question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
<answer>3</answer>
</qa_pair>
<!-- More qa_pairs... -->
</evaluation>
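An evaluation file in this structure can be sanity-checked with the standard library before running it. This checker is a sketch that assumes only the tags shown in the example above; adapt it if your evaluation format grows additional fields.

```python
# Validate an evaluation XML file: an <evaluation> root containing
# <qa_pair> elements, each with a non-empty <question> and <answer>.
import xml.etree.ElementTree as ET

def check_evaluation(xml_text: str) -> list[str]:
    """Return a list of problems; an empty list means the file is usable."""
    problems = []
    root = ET.fromstring(xml_text)
    if root.tag != "evaluation":
        problems.append(f"root tag is <{root.tag}>, expected <evaluation>")
    pairs = root.findall("qa_pair")
    if not pairs:
        problems.append("no <qa_pair> elements found")
    for i, pair in enumerate(pairs):
        for tag in ("question", "answer"):
            node = pair.find(tag)
            if node is None or not (node.text or "").strip():
                problems.append(f"qa_pair {i}: missing or empty <{tag}>")
    return problems

sample = """<evaluation>
  <qa_pair>
    <question>What number X was being determined?</question>
    <answer>3</answer>
  </qa_pair>
</evaluation>"""
print(check_evaluation(sample))  # prints [] when the file is well-formed
```

XML comments such as the `<!-- More qa_pairs... -->` placeholder are ignored by the parser, so they are safe to leave in real files.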
Load these resources as needed during development:
- https://modelcontextprotocol.io/llms-full.txt - Complete MCP specification
- https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md - Python SDK README
- https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md - TypeScript SDK README
- 🐍 Python Implementation Guide - Complete Python/FastMCP guide, including tool registration with @mcp.tool
- ⚡ TypeScript Implementation Guide - Complete TypeScript guide, including tool registration with server.registerTool

Weekly Installs: 192
Repository: neolabhq/context-engineering-kit
GitHub Stars: 699
First Seen: Feb 19, 2026
Installed on: opencode (185), github-copilot (184), codex (184), gemini-cli (183), cursor (182), kimi-cli (181)