docker-best-practices by josiahsiegel/claude-plugin-marketplace
npx skills add https://github.com/josiahsiegel/claude-plugin-marketplace --skill docker-best-practices
MANDATORY: Always Use Backslashes on Windows for File Paths
When using Edit or Write tools on Windows, you MUST use backslashes (\) in file paths, NOT forward slashes (/).
Examples:
❌ D:/repos/project/file.tsx
✅ D:\repos\project\file.tsx
This applies to:
NEVER create new documentation files unless explicitly requested by the user.
This skill provides current Docker best practices across all aspects of container development, deployment, and operation.
2025 Recommended Hierarchy:
1. Chainguard (cgr.dev/chainguard/*) - zero-CVE goal, SBOM included
2. Alpine (alpine:3.19) - ~7MB, minimal attack surface
3. Distroless (gcr.io/distroless/*) - ~2MB, no shell
4. Slim (node:20-slim) - ~70MB, balanced
Key rules:
- Pin exact versions, e.g. node:20.11.0-alpine3.19
- Never use latest (unpredictable, breaks reproducibility)
Optimal layer ordering (least to most frequently changing):
1. Base image and system dependencies
2. Application dependencies (package.json, requirements.txt, etc.)
3. Application code
4. Configuration and metadata
Rationale: Docker caches layers. If code changes but dependencies don't, cached dependency layers are reused, speeding up builds.
Example:
FROM python:3.12-slim
# 1. System packages (rarely change)
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
&& rm -rf /var/lib/apt/lists/*
# 2. Dependencies (change occasionally)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# 3. Application code (changes frequently)
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
Use multi-stage builds to separate build dependencies from runtime:
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine AS runtime
WORKDIR /app
# Only copy what's needed for runtime
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER node
CMD ["node", "dist/server.js"]
Benefits: the final image carries only runtime artifacts - it is smaller, ships no build toolchain, and exposes a reduced attack surface.
Combine commands to reduce layers and image size:
# Bad - 3 layers, cleanup doesn't reduce size
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*
# Good - 1 layer, cleanup effective
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
rm -rf /var/lib/apt/lists/*
Always create .dockerignore to exclude unnecessary files:
# Version control
.git
.gitignore
# Dependencies
node_modules
__pycache__
*.pyc
# IDE
.vscode
.idea
# OS
.DS_Store
Thumbs.db
# Logs
*.log
logs/
# Testing
coverage/
.nyc_output
*.test.js
# Documentation
README.md
docs/
# Environment
.env
.env.local
*.local
docker run \
# Run as non-root
--user 1000:1000 \
# Drop all capabilities, add only needed ones
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
# Read-only filesystem
--read-only \
# Temporary writable filesystems
--tmpfs /tmp:noexec,nosuid \
# No new privileges
--security-opt="no-new-privileges:true" \
# Resource limits
--memory="512m" \
--cpus="1.0" \
my-image
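The same hardening options translate directly to Compose keys. A sketch of the equivalent service definition (the service name and image are placeholders):

```yaml
services:
  app:
    image: my-image
    user: "1000:1000"          # run as non-root
    cap_drop:
      - ALL                     # drop all capabilities...
    cap_add:
      - NET_BIND_SERVICE        # ...then add back only what's needed
    read_only: true             # read-only root filesystem
    tmpfs:
      - /tmp:noexec,nosuid      # writable scratch space only
    security_opt:
      - no-new-privileges:true
    mem_limit: 512m
    cpus: 1.0
```

Keeping these in the Compose file makes the hardening reproducible instead of depending on everyone remembering the right docker run flags.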
Always set resource limits in production:
# docker-compose.yml
services:
app:
deploy:
resources:
limits:
cpus: '2.0'
memory: 1G
reservations:
cpus: '1.0'
memory: 512M
Implement health checks for all long-running containers:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 --start-period=40s \
CMD curl -f http://localhost:3000/health || exit 1
Or in compose:
services:
app:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
timeout: 3s
retries: 3
start_period: 40s
Configure proper logging to prevent disk fill-up:
services:
app:
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
Or system-wide in /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
services:
app:
# For development
restart: "no"
# For production
restart: unless-stopped
# Or with fine-grained control (Swarm mode)
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
# No version field needed (Compose v2.40.3+)
services:
# Service definitions
web:
# ...
api:
# ...
database:
# ...
networks:
# Custom networks (preferred)
frontend:
backend:
internal: true
volumes:
# Named volumes (preferred for persistence)
db-data:
app-data:
configs:
# Configuration files (Swarm mode)
app-config:
file: ./config/app.conf
secrets:
# Secrets (Swarm mode)
db-password:
file: ./secrets/db_pass.txt
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # No external access
services:
web:
networks:
- frontend
api:
networks:
- frontend
- backend
database:
networks:
- backend # Not accessible from frontend
services:
app:
# Load from file (preferred for non-secrets)
env_file:
- .env
# Inline for service-specific vars
environment:
- NODE_ENV=production
- LOG_LEVEL=info
# For Swarm mode secrets
secrets:
- db_password
Important:
- Add .env to .gitignore
- Commit .env.example as a template
services:
api:
depends_on:
database:
condition: service_healthy # Wait for health check
redis:
condition: service_started # Just wait for start
# Use semantic versioning
my-app:1.2.3
my-app:1.2
my-app:1
my-app:latest
# Include git commit for traceability
my-app:1.2.3-abc123f
# Environment tags
my-app:1.2.3-production
my-app:1.2.3-staging
Never do this:
# BAD - secret in layer history
ENV API_KEY=secret123
RUN echo "password" > /app/config
Do this:
# Use Docker secrets (Swarm) or external secret management
docker secret create db_password ./password.txt
# Or mount secrets at runtime
docker run -v /secure/secrets:/run/secrets:ro my-app
# Or use environment files (not in image)
docker run --env-file /secure/.env my-app
services:
app:
# Health checks
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
# Labels for monitoring tools
labels:
- "prometheus.io/scrape=true"
- "prometheus.io/port=9090"
- "com.company.team=backend"
- "com.company.version=1.2.3"
# Logging
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# Backup named volume
docker run --rm \
-v VOLUME_NAME:/data \
-v $(pwd):/backup \
alpine tar czf /backup/backup-$(date +%Y%m%d).tar.gz -C /data .
# Restore volume
docker run --rm \
-v VOLUME_NAME:/data \
-v $(pwd):/backup \
alpine tar xzf /backup/backup.tar.gz -C /data
services:
app:
# For Swarm mode - rolling updates
deploy:
replicas: 3
update_config:
parallelism: 1 # Update 1 at a time
delay: 10s # Wait 10s between updates
failure_action: rollback
monitor: 60s
rollback_config:
parallelism: 1
delay: 5s
Use user namespace remapping for added security
Leverage native performance advantages
Use Alpine for smallest images
Configure SELinux/AppArmor profiles
Use systemd for Docker daemon management
# /etc/docker/daemon.json
{
  "userns-remap": "default",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "live-restore": true
}
Allocate sufficient resources in Docker Desktop
Use :delegated or :cached for bind mounts
Consider multi-platform builds for ARM (M1/M2)
Limit file sharing to necessary directories
volumes:
Choose container type: Windows or Linux
Use forward slashes in paths
Ensure drives are shared in Docker Desktop
Be aware of line ending differences (CRLF vs LF)
Consider WSL2 backend for better performance
volumes:
# Use BuildKit (faster, better caching)
export DOCKER_BUILDKIT=1
# Use cache mounts
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt
# Use bind mounts for dependencies
RUN --mount=type=bind,source=package.json,target=package.json \
--mount=type=bind,source=package-lock.json,target=package-lock.json \
--mount=type=cache,target=/root/.npm \
npm ci
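Cache and bind mounts require BuildKit's Dockerfile frontend. A minimal complete sketch combining the snippets above (base image and entrypoint are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
# Package files are bind-mounted, so they never enter a layer;
# the npm cache persists across builds without bloating the image
RUN --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
CMD ["node", "server.js"]
```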
Use multi-stage builds
Choose minimal base images
Clean up in the same layer
Use .dockerignore
Remove build dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    package1 \
    package2 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# Use exec form (no shell overhead)
CMD ["node", "server.js"] # Good
# vs
CMD node server.js # Bad - spawns shell
# Optimize signals
STOPSIGNAL SIGTERM
# Run as non-root (slightly faster, much more secure)
USER appuser
Image Security:
Runtime Security:
Compliance:
❌ Don't:
- Run containers with --privileged
- Use the latest tag
✅ Do:
This skill represents current Docker best practices. Always verify against official documentation for the latest recommendations, as Docker evolves continuously.
Weekly Installs: 493
Repository: josiahsiegel/claude-plugin-marketplace
GitHub Stars: 21
First Seen: Jan 21, 2026
Security Audits: Gen Agent Trust Hub - Pass; Socket - Pass; Snyk - Warn
Installed on: opencode (413), gemini-cli (396), codex (389), github-copilot (371), cursor (348), kimi-cli (293)