golang-grpc by samber/cc-skills-golang
npx skills add https://github.com/samber/cc-skills-golang --skill golang-grpc
Persona: You are a Go distributed systems engineer. You design gRPC services for correctness and operability — proper status codes, deadlines, interceptors, and graceful shutdown matter as much as the happy path.
Modes:
Treat gRPC as a pure transport layer — keep it separate from business logic. The official Go implementation is google.golang.org/grpc.
This skill is not exhaustive. Refer to the library documentation and code examples for more information. Context7 can help as a discoverability platform.
| Concern | Package / Tool |
|---|---|
| Service definition | protoc or buf with .proto files |
| Code generation | protoc-gen-go, protoc-gen-go-grpc |
| Error handling | google.golang.org/grpc/status with codes |
| Rich error details | google.golang.org/genproto/googleapis/rpc/errdetails |
| Interceptors | grpc.ChainUnaryInterceptor, grpc.ChainStreamInterceptor |
| Middleware ecosystem | github.com/grpc-ecosystem/go-grpc-middleware |
| Testing | google.golang.org/grpc/test/bufconn |
| TLS / mTLS | google.golang.org/grpc/credentials |
| Health checks | google.golang.org/grpc/health |
Organize by domain with versioned directories (proto/user/v1/). Always use Request/Response wrapper messages — bare types like string cannot have fields added later. Generate with buf generate or protoc.
Proto & code generation reference
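A minimal layout sketch of a versioned proto file with Request/Response wrappers; the service, message names, and go_package path are illustrative assumptions, not part of any real API:

```proto
// proto/user/v1/user.proto — illustrative layout; names are assumptions.
syntax = "proto3";

package user.v1;

option go_package = "example.com/gen/user/v1;userv1";

service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}

// Wrapper messages instead of bare types: fields can be added later
// without breaking the wire contract.
message GetUserRequest {
  string user_id = 1;
}

message GetUserResponse {
  User user = 1;
}

message User {
  string id = 1;
  string email = 2;
}
```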
Implement health check service (grpc_health_v1) — Kubernetes probes need it to determine readiness
Use interceptors for cross-cutting concerns (logging, auth, recovery) — keeps business logic clean
Use GracefulStop() with a timeout fallback to Stop() — drains in-flight RPCs while preventing hangs
Disable reflection in production — it exposes your full API surface
srv := grpc.NewServer(
	grpc.ChainUnaryInterceptor(loggingInterceptor, recoveryInterceptor),
)
pb.RegisterUserServiceServer(srv, svc)
healthpb.RegisterHealthServer(srv, health.NewServer())

go srv.Serve(lis)

// On shutdown signal:
stopped := make(chan struct{})
go func() {
	srv.GracefulStop()
	close(stopped)
}()
select {
case <-stopped:
case <-time.After(15 * time.Second):
	srv.Stop()
}
func loggingInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
start := time.Now()
resp, err := handler(ctx, req)
log.Printf("method=%s duration=%s code=%s", info.FullMethod, time.Since(start), status.Code(err))
return resp, err
}
Reuse connections — gRPC multiplexes RPCs on a single HTTP/2 connection; one-per-request wastes TCP/TLS handshakes
Set deadlines on every call (context.WithTimeout) — without one, a slow upstream hangs goroutines indefinitely
Use round_robin with headless Kubernetes services via dns:/// scheme
Pass metadata (auth tokens, trace IDs) via metadata.NewOutgoingContext
conn, err := grpc.NewClient("dns:///user-service:50051",
	grpc.WithTransportCredentials(creds),
	grpc.WithDefaultServiceConfig(`{
		"loadBalancingPolicy": "round_robin",
		"methodConfig": [{
			"name": [{"service": ""}],
			"timeout": "5s",
			"retryPolicy": {
				"maxAttempts": 3,
				"initialBackoff": "0.1s",
				"maxBackoff": "1s",
				"backoffMultiplier": 2,
				"retryableStatusCodes": ["UNAVAILABLE"]
			}
		}]
	}`),
)
client := pb.NewUserServiceClient(conn)
Always return gRPC errors using status.Error with a specific code — a raw error becomes codes.Unknown, telling the client nothing actionable. Clients use codes to decide retry vs fail-fast vs degrade.
| Code | When to Use |
|---|---|
| InvalidArgument | Malformed input (missing field, bad format) |
| NotFound | Entity does not exist |
| AlreadyExists | Create failed, entity exists |
| PermissionDenied | Caller lacks permission |
| Unauthenticated | Missing or invalid token |
| FailedPrecondition | System not in required state |
| ResourceExhausted | Rate limit or quota exceeded |
| Unavailable | Transient issue, safe to retry |
| Internal | Unexpected bug |
| DeadlineExceeded | Timeout |
// ✗ Bad — caller gets codes.Unknown, can't decide whether to retry
return nil, fmt.Errorf("user not found")
// ✓ Good — specific code lets clients act appropriately
if errors.Is(err, ErrNotFound) {
return nil, status.Errorf(codes.NotFound, "user %q not found", req.UserId)
}
return nil, status.Errorf(codes.Internal, "lookup failed: %v", err)
For field-level validation errors, attach errdetails.BadRequest via status.WithDetails.
| Pattern | Use Case |
|---|---|
| Server streaming | Server sends a sequence (log tailing, result sets) |
| Client streaming | Client sends a sequence, server responds once (file upload, batch) |
| Bidirectional | Both send independently (chat, real-time sync) |
Prefer streaming over large single messages — avoids per-message size limits and lowers memory pressure.
func (s *server) ListUsers(req *pb.ListUsersRequest, stream pb.UserService_ListUsersServer) error {
for _, u := range users {
if err := stream.Send(u); err != nil {
return err
}
}
return nil
}
Use bufconn for in-memory connections that exercise the full gRPC stack (serialization, interceptors, metadata) without network overhead. Always test that error scenarios return the expected gRPC status codes.
Use credentials.PerRPCCredentials, and validate tokens in an auth interceptor.

| Setting | Purpose | Typical Value |
|---|---|---|
| keepalive.ServerParameters.Time | Ping interval for idle connections | 30s |
| keepalive.ServerParameters.Timeout | Ping ack timeout | 10s |
| grpc.MaxRecvMsgSize | Override 4 MB default for large payloads | 16 MB |
| Connection pooling | Multiple conns for high-load streaming | 4 connections |
Most services do not need connection pooling — profile before adding complexity.
| Mistake | Fix |
|---|---|
| Returning raw error | Becomes codes.Unknown — client can't decide whether to retry. Use status.Errorf with a specific code |
| No deadline on client calls | Slow upstream hangs indefinitely. Always context.WithTimeout |
| New connection per request | Wastes TCP/TLS handshakes. Create once, reuse — HTTP/2 multiplexes RPCs |
| Reflection enabled in production | Lets attackers enumerate every method. Enable only in dev/staging |
| codes.Internal for all errors | Wrong codes break client retry logic. Unavailable triggers retry; InvalidArgument does not |
| Bare types as RPC arguments | Can't add fields to string. Wrapper messages allow backwards-compatible evolution |
| Missing health check service | Kubernetes can't determine readiness, kills pods during deployments |
| Ignoring context cancellation | Long operations continue after caller gave up. Check ctx.Err() |
samber/cc-skills-golang@golang-context skill for deadline and cancellation patterns
samber/cc-skills-golang@golang-error-handling skill for gRPC error to Go error mapping
samber/cc-skills-golang@golang-observability skill for gRPC interceptors (logging, tracing, metrics)
samber/cc-skills-golang@golang-testing skill for gRPC testing with bufconn
Weekly Installs: 118
GitHub Stars: 276
First Seen: 3 days ago
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on: opencode (99), gemini-cli (96), codex (96), cursor (96), kimi-cli (95), amp (95)