npx skills add https://github.com/bobmatnyc/claude-mpm-skills --skill docker
Docker
Docker provides containerization for packaging applications with their dependencies into isolated, portable units. Containers ensure consistency across development, testing, and production environments, eliminating "works on my machine" problems.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
docker build -t myapp:1.0 .
docker run -p 3000:3000 myapp:1.0
Each Dockerfile instruction creates a layer. Docker caches unchanged layers for faster builds.
# GOOD: dependencies change less frequently than code
FROM python:3.11-slim
COPY requirements.txt .
RUN pip install -r requirements.txt # Cached unless requirements.txt changes
COPY . . # Rebuilds only when code changes
# BAD: invalidates the cache on every code change
FROM python:3.11-slim
COPY . . # Changes frequently
RUN pip install -r requirements.txt # Reinstalls on every build
Persistent data storage that survives container restarts.
# Named volume (managed by Docker)
docker run -v mydata:/app/data myapp
# Bind mount (host directory)
docker run -v $(pwd)/data:/app/data myapp
# Anonymous volume (temporary)
docker run -v /app/data myapp
Containers communicate through Docker networks.
# Create a network
docker network create mynetwork
# Run containers on the network
docker run --network mynetwork --name db postgres
docker run --network mynetwork --name app myapp
# The app can connect to the database using the hostname "db"
# Base image
FROM node:18-alpine
# Metadata
LABEL maintainer="dev@example.com"
LABEL version="1.0"
# Set working directory
WORKDIR /app
# Copy files
COPY package*.json ./
COPY src/ ./src/
# Run commands (creates a layer)
RUN npm ci --only=production
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
# Expose port (documentation only)
EXPOSE 3000
# Default command
CMD ["node", "src/server.js"]
# Alternative: ENTRYPOINT (not overridden by docker run args)
ENTRYPOINT ["node"]
CMD ["src/server.js"] # Default args for ENTRYPOINT
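The ENTRYPOINT/CMD pairing above determines exactly what `docker run` arguments replace. A sketch of the resulting behavior (the image name `myapp` is assumed for illustration):

```
# With ENTRYPOINT ["node"] and CMD ["src/server.js"]:
# docker run myapp                            -> node src/server.js
# docker run myapp src/worker.js              -> node src/worker.js   (args replace CMD only)
# docker run --entrypoint /bin/sh -it myapp   -> /bin/sh              (explicit override needed)
```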
# 1. Base image (rarely changes)
FROM python:3.11-slim
# 2. System dependencies (rarely change)
RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*
# 3. Application dependencies (change occasionally)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# 4. Application code (changes frequently)
COPY . .
# 5. Runtime configuration
ENV PYTHONUNBUFFERED=1
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Exclude files from the build context (faster builds, smaller images).
# .dockerignore
node_modules/
npm-debug.log
.git/
.gitignore
*.md
.env
.vscode/
__pycache__/
*.pyc
.pytest_cache/
coverage/
dist/
build/
Optimize image size by separating build and runtime stages.
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]
Benefits:
- Build tools and dev dependencies never reach the final image
- Significantly smaller production images
- Reduced attack surface
# Build stage
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# Runtime stage
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "app.py"]
# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o server
# Runtime stage (scratch = an empty base image)
FROM scratch
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
Result: a final image of roughly 10 MB containing only the compiled binary.
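One caveat: a scratch image ships no CA certificates (or timezone data), so outbound HTTPS from the binary fails certificate verification. A common fix is to copy the certificate bundle from the build stage — a sketch, using the standard Alpine bundle path (verify the path for your base image):

```
FROM golang:1.21-alpine AS builder
RUN apk add --no-cache ca-certificates
# ... build steps as above ...

FROM scratch
# CA bundle so TLS connections can be verified
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]
```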
Define multi-container applications in YAML.
version: '3.8'
services:
app:
build: .
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgres://db:5432/myapp
depends_on:
- db
volumes:
- ./src:/app/src # Hot reload in development
db:
image: postgres:15-alpine
environment:
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- db_data:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
db_data:
# Start all services
docker-compose up
# Start in the background
docker-compose up -d
# Rebuild images
docker-compose up --build
# Stop services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# View logs
docker-compose logs -f app
# Run a one-off command
docker-compose run app npm test
version: '3.8'
services:
# Frontend
web:
build:
context: ./frontend
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
volumes:
- ./frontend/src:/app/src
environment:
- REACT_APP_API_URL=http://localhost:8000
# Backend API
api:
build: ./backend
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
- REDIS_URL=redis://redis:6379
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
volumes:
- ./backend:/app
command: uvicorn main:app --host 0.0.0.0 --reload
# Database
db:
image: postgres:15-alpine
environment:
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
# Cache
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
# Worker (background jobs)
worker:
build: ./backend
command: celery -A tasks worker --loglevel=info
environment:
- REDIS_URL=redis://redis:6379
depends_on:
- redis
- db
volumes:
db_data:
redis_data:
networks:
default:
name: myapp_network
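`depends_on` with `condition: service_healthy` handles ordering at startup, but a service can still race a dependency that restarts later. A defensive retry helper for an entrypoint script is a common complement — a sketch; the `pg_isready` usage in the trailing comment is illustrative:

```shell
#!/bin/sh
# wait_for ATTEMPTS CMD...: retry CMD once per second until it succeeds
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      echo "dependency not ready after $i attempts" >&2
      return 1
    fi
    sleep 1
  done
}

# Example: wait_for 30 pg_isready -h db -U postgres && exec gunicorn app:app
```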
services:
app:
build: .
volumes:
- ./src:/app/src # Sync source code
- /app/node_modules # Prevent overwriting the container's node_modules
command: npm run dev
# Dockerfile.dev
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install # Includes dev dependencies
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
services:
web:
build: .
volumes:
- .:/app
command: python manage.py runserver 0.0.0.0:8000
# or, for FastAPI:
# command: uvicorn main:app --host 0.0.0.0 --reload
.devcontainer/devcontainer.json:
{
"name": "Python Dev Container",
"dockerComposeFile": "../docker-compose.yml",
"service": "app",
"workspaceFolder": "/app",
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance"
],
"settings": {
"python.defaultInterpreterPath": "/usr/local/bin/python"
}
}
},
"postCreateCommand": "pip install -r requirements-dev.txt",
"remoteUser": "vscode"
}
# PostgreSQL
docker run -d \
--name dev-postgres \
-e POSTGRES_PASSWORD=localdev \
-e POSTGRES_DB=myapp_dev \
-p 5432:5432 \
-v pgdata:/var/lib/postgresql/data \
postgres:15-alpine
# MySQL
docker run -d \
--name dev-mysql \
-e MYSQL_ROOT_PASSWORD=localdev \
-e MYSQL_DATABASE=myapp_dev \
-p 3306:3306 \
-v mysqldata:/var/lib/mysql \
mysql:8
# MongoDB
docker run -d \
--name dev-mongo \
-p 27017:27017 \
-v mongodata:/data/db \
mongo:7
# Redis
docker run -d \
--name dev-redis \
-p 6379:6379 \
redis:7-alpine
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# Health check endpoint
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD node healthcheck.js
EXPOSE 3000
CMD ["node", "server.js"]
// healthcheck.js
const http = require('http');
const options = {
host: 'localhost',
port: 3000,
path: '/health',
timeout: 2000
};
const request = http.request(options, (res) => {
if (res.statusCode === 200) {
process.exit(0);
} else {
process.exit(1);
}
});
request.on('error', () => process.exit(1));
request.end();
FROM python:3.11-slim
# 1. Use a non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser
# 2. Install dependencies as root
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# 3. Copy application files
COPY --chown=appuser:appuser . .
# 4. Switch to the non-root user
USER appuser
# 5. Drop unnecessary privileges
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
Additional security measures (for example, scanning images for vulnerabilities):
docker scan myapp:latest # note: newer Docker releases replace `docker scan` with `docker scout`
# Docker Swarm secrets (production)
echo "db_password_here" | docker secret create db_password -
version: '3.8'
services:
app:
image: myapp
secrets:
- db_password
environment:
- DB_PASSWORD_FILE=/run/secrets/db_password
secrets:
db_password:
external: true
Alternative: environment files
# docker-compose.yml
services:
app:
env_file:
- .env.production # Never commit this file
# .env.production (gitignored)
DATABASE_URL=postgresql://user:pass@db:5432/prod
SECRET_KEY=your-secret-key
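With the `DB_PASSWORD_FILE` convention above, something must actually read the file — typically the entrypoint. A minimal POSIX-sh helper (a sketch; the `load_secret` name is this example's, not a Docker API):

```shell
#!/bin/sh
# load_secret VAR: if VAR_FILE names a readable file, read it into VAR and export it
load_secret() {
  var=$1
  file=$(eval "printf '%s' \"\${${var}_FILE}\"")
  if [ -n "$file" ] && [ -f "$file" ]; then
    eval "$var=\$(cat \"\$file\"); export $var"
  fi
}

# Usage in an entrypoint:
# load_secret DB_PASSWORD
# exec "$@"
```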
services:
app:
image: myapp
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
# Command-line resource limits
docker run -d \
--memory="512m" \
--cpus="1.0" \
--restart=unless-stopped \
myapp
FROM python:3.11-slim
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application
COPY . .
# Collect static files
RUN python manage.py collectstatic --noinput
# Create a non-root user
RUN useradd -m -u 1000 django && chown -R django:django /app
USER django
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "myproject.wsgi:application"]
docker-compose.yml:
version: '3.8'
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/app
ports:
- "8000:8000"
environment:
- DEBUG=1
- DATABASE_URL=postgres://postgres:postgres@db:5432/django_dev
depends_on:
- db
db:
image: postgres:15-alpine
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: django_dev
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]
# Multi-stage build for Next.js
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
next.config.js (required for standalone output):
module.exports = {
output: 'standalone',
}
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --only=production
# Copy the application
COPY . .
# Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 3000
CMD ["node", "server.js"]
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json tsconfig.json ./
RUN npm ci
COPY src/ ./src/
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Initialize a swarm
docker swarm init
# Deploy a stack
docker stack deploy -c docker-compose.yml myapp
# Scale a service
docker service scale myapp_web=5
# Update a service (zero downtime)
docker service update --image myapp:2.0 myapp_web
# Remove the stack
docker stack rm myapp
| Feature | Docker Compose | Docker Swarm | Kubernetes |
|---|---|---|---|
| Complexity | Low | Medium | High |
| Use case | Local development | Small clusters | Production at scale |
| Setup | Single file | Built-in | Separate installation |
| Scaling | Manual | Automatic | Automatic + advanced |
| High availability | No | Yes | Yes |
| Ecosystem | Limited | Docker | Massive |
When to use each:
- Docker Compose: local development and simple single-host deployments
- Docker Swarm: small clusters that want built-in orchestration with minimal setup
- Kubernetes: large-scale production workloads that need advanced scheduling and a rich ecosystem
# .github/workflows/docker.yml
name: Build and Push Docker Image
on:
push:
branches: [main]
pull_request:
branches: [main]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v3
- name: Log in to the container registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=sha
- name: Build and push
uses: docker/build-push-action@v4
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Run tests
run: |
docker run --rm ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} npm test
# .gitlab-ci.yml
stages:
- build
- test
- deploy
variables:
DOCKER_DRIVER: overlay2
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
build:
stage: build
image: docker:latest
services:
- docker:dind
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build -t $IMAGE_TAG .
- docker push $IMAGE_TAG
test:
stage: test
script:
- docker run --rm $IMAGE_TAG npm test
deploy:
stage: deploy
script:
- docker pull $IMAGE_TAG
- docker tag $IMAGE_TAG $CI_REGISTRY_IMAGE:latest
- docker push $CI_REGISTRY_IMAGE:latest
only:
- main
# Stream logs
docker logs -f container_name
# Last 100 lines
docker logs --tail 100 container_name
# Logs since a timestamp
docker logs --since 2024-01-01T10:00:00 container_name
# With timestamps
docker logs -t container_name
# Docker Compose logs
docker-compose logs -f service_name
# Interactive shell
docker exec -it container_name /bin/sh
# or
docker exec -it container_name /bin/bash
# Run a single command
docker exec container_name ls -la /app
# Run as a different user
docker exec -u root container_name apt-get update
# Full container details
docker inspect container_name
# Specific field (IP address)
docker inspect -f '{{.NetworkSettings.IPAddress}}' container_name
# Environment variables
docker inspect -f '{{.Config.Env}}' container_name
# Mounted volumes
docker inspect -f '{{.Mounts}}' container_name
# Real-time stats
docker stats
# Single container
docker stats container_name
# Non-streaming (single snapshot)
docker stats --no-stream
# List networks
docker network ls
# Inspect a network
docker network inspect bridge
# Test connectivity between containers
docker exec container1 ping container2
# Check DNS resolution
docker exec container_name nslookup other_container
# Build with no cache
docker build --no-cache -t myapp .
# Show build progress
docker build --progress=plain -t myapp .
# Build a specific stage
docker build --target builder -t myapp-builder .
# Inspect intermediate layers
docker history myapp:latest
Example: Docker setup from an MCP server project:
- docker-compose.yml runs mcp-server with the port range 8875-8895 and an optional chrome profile
- Read-only source mount (./src:/app/src:ro) with persistent log and temp volumes
- Environment: MCP_DEBUG=true, MCP_LOG_LEVEL=DEBUG, MCP_HOST=0.0.0.0, MCP_PORT=8875
- Profiles: chrome (browser) and tools (dev-tools container)
- Dockerfile.dev: ARG PYTHON_VERSION=3.11, installs watchdog + Playwright Chromium, runs python -m src.dev_runner, health check on /health
- Production Dockerfile: virtualenv in /opt/venv on python:3.11-slim, curl for the health check, sets PYTHONPATH=/app, CMD ["python", "run_api_server.py"] with a /health check
# Find the process using the port
lsof -i :3000
# or
netstat -tulpn | grep 3000
# Kill the process
kill -9 <PID>
# Or use a different host port
docker run -p 3001:3000 myapp
# Check that Docker is running
docker info
# Restart Docker Desktop (Mac/Windows)
# or
sudo systemctl restart docker # Linux
# Check permissions (Linux)
sudo usermod -aG docker $USER
# Log out and back in
# Remove unused containers, images, and volumes
docker system prune -a --volumes
# Remove only dangling images
docker image prune
# Remove stopped containers
docker container prune
# Remove unused volumes
docker volume prune
# Check disk usage
docker system df
# Create a .dockerignore
cat > .dockerignore << EOF
node_modules/
.git/
*.log
dist/
coverage/
EOF
# Build with a specific context
docker build -f Dockerfile -t myapp ./src
# Check the logs
docker logs container_name
# Run with an interactive shell to debug
docker run -it myapp /bin/sh
# Override the entrypoint
docker run -it --entrypoint /bin/sh myapp
# Check the exit code
docker inspect -f '{{.State.ExitCode}}' container_name
# Run as root to debug
docker exec -u root -it container_name /bin/sh
# Fix ownership
docker exec -u root container_name chown -R appuser:appuser /app
# Or rebuild with correct permissions in the Dockerfile
# Use BuildKit (faster, better caching)
DOCKER_BUILDKIT=1 docker build -t myapp .
# Use multi-stage builds to reduce layers
# Order instructions by change frequency
# Use .dockerignore to exclude unnecessary files
# Set memory limits
docker run -m 512m myapp
# Monitor memory
docker stats container_name
# Check the application for memory leaks
# Use delegated consistency (Mac)
volumes:
- ./src:/app/src:delegated
# Or use named volumes instead of bind mounts
volumes:
- node_modules:/app/node_modules
# GOOD
RUN apt-get update && apt-get install -y \
package1 \
package2 \
&& rm -rf /var/lib/apt/lists/*
# BAD (creates 3 layers; the apt cache remains in layer 2)
RUN apt-get update
RUN apt-get install -y package1 package2
RUN rm -rf /var/lib/apt/lists/*
# Before: 800MB
FROM node:18
COPY . .
RUN npm install
CMD ["node", "server.js"]
# After: 120MB
FROM node:18-alpine
COPY package*.json ./
RUN npm ci --only=production
COPY server.js .
CMD ["node", "server.js"]
Pin specific image tags (avoid latest).
Development:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install # Includes dev dependencies
COPY . .
CMD ["npm", "run", "dev"]
Production:
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
# Log to stdout/stderr (Docker captures these)
CMD ["node", "server.js"] # GOOD
# Don't log to files (lost when the container stops)
CMD ["node", "server.js", ">", "app.log"] # BAD (exec form also passes ">" as a literal argument, so no redirect happens)
// Application logging
console.log('info message'); // stdout
console.error('error message'); // stderr
// Use structured logging
console.log(JSON.stringify({
level: 'info',
timestamp: new Date().toISOString(),
message: 'request processed',
requestId: '123'
}));
# Use ARG for build-time variables
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
# Use ENV for runtime variables
ENV PORT=3000
ENV LOG_LEVEL=info
# Override at runtime:
# docker run -e PORT=8080 -e LOG_LEVEL=debug myapp
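Inside an entrypoint script, the same defaults can be applied with POSIX parameter expansion, so a missing `-e` flag falls back cleanly (values mirror the ENV lines above):

```shell
#!/bin/sh
# ":=" assigns the default only when the variable is unset or empty
: "${PORT:=3000}"
: "${LOG_LEVEL:=info}"
echo "starting on port $PORT with log level $LOG_LEVEL"
```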
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
# docker-compose.yml
services:
app:
image: myapp
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 3s
retries: 3
start_period: 40s
// Node.js example
process.on('SIGTERM', () => {
console.log('SIGTERM received, shutting down the server...');
server.close(() => {
console.log('Server closed');
process.exit(0);
});
});
# Use the exec form so signals are handled correctly
CMD ["node", "server.js"] # GOOD
CMD node server.js # BAD (wrapped in /bin/sh; signals are not forwarded)
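If a shell entrypoint script is unavoidable, it must forward signals to the child itself, or the child never sees SIGTERM. A minimal forwarding helper (a sketch; `run_with_forwarding` is this example's name, and in a real entrypoint the last line would be something like `run_with_forwarding node server.js`):

```shell
#!/bin/sh
# run_with_forwarding CMD...: run CMD as a background child,
# forward SIGTERM/SIGINT to it, and return the child's exit status
run_with_forwarding() {
  "$@" &
  child=$!
  trap 'kill -TERM "$child" 2>/dev/null' TERM INT
  wait "$child"
}
```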
# Images
docker build -t name:tag .
docker pull image:tag
docker push image:tag
docker images
docker rmi image:tag
# Containers
docker run -d --name container image
docker ps # Running containers
docker ps -a # All containers
docker stop container
docker start container
docker restart container
docker rm container
docker logs -f container
docker exec -it container /bin/sh
# Cleanup
docker system prune -a # Remove all unused resources
docker container prune # Remove stopped containers
docker image prune # Remove dangling images
docker volume prune # Remove unused volumes
# Compose
docker-compose up -d
docker-compose down
docker-compose logs -f
docker-compose exec service /bin/sh
docker-compose build
# docker run flags
-d # Detached mode (background)
-it # Interactive with a TTY
-p 8080:80 # Port mapping (host:container)
--name myapp # Container name
-e VAR=value # Environment variable
-v /host:/container # Volume mount
--network name # Connect to a network
--rm # Remove the container on exit
-m 512m # Memory limit
--cpus 1.0 # CPU limit
FROM image:tag # Base image
WORKDIR /path # Set the working directory
COPY src dst # Copy files
ADD src dst # Copy (supports URLs/archives)
RUN command # Execute a command
ENV KEY=value # Environment variable
EXPOSE port # Document a port
CMD ["executable"] # Default command
ENTRYPOINT ["exec"] # Command prefix
VOLUME /path # Create a mount point
USER username # Set the user
ARG name=default # Build argument
LABEL key=value # Metadata
HEALTHCHECK CMD command # Health check
Docker
Docker provides containerization for packaging applications with their dependencies into isolated, portable units. Containers ensure consistency across development, testing, and production environments, eliminating "works on my machine" problems.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
docker build -t myapp:1.0 .
docker run -p 3000:3000 myapp:1.0
Each Dockerfile instruction creates a layer. Docker caches unchanged layers for faster builds.
# GOOD: Dependencies change less frequently than code
FROM python:3.11-slim
COPY requirements.txt .
RUN pip install -r requirements.txt # Cached unless requirements.txt changes
COPY . . # Rebuild only when code changes
# BAD: Invalidates cache on every code change
FROM python:3.11-slim
COPY . . # Changes frequently
RUN pip install -r requirements.txt # Reinstalls on every build
Persistent data storage that survives container restarts.
# Named volume (managed by Docker)
docker run -v mydata:/app/data myapp
# Bind mount (host directory)
docker run -v $(pwd)/data:/app/data myapp
# Anonymous volume (temporary)
docker run -v /app/data myapp
Containers communicate through Docker networks.
# Create network
docker network create mynetwork
# Run containers on network
docker run --network mynetwork --name db postgres
docker run --network mynetwork --name app myapp
# App can connect to db using hostname "db"
# Base image
FROM node:18-alpine
# Metadata
LABEL maintainer="dev@example.com"
LABEL version="1.0"
# Set working directory
WORKDIR /app
# Copy files
COPY package*.json ./
COPY src/ ./src/
# Run commands (creates layer)
RUN npm ci --only=production
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
# Expose ports (documentation only)
EXPOSE 3000
# Default command
CMD ["node", "src/server.js"]
# Alternative: ENTRYPOINT (not overridden by docker run args)
ENTRYPOINT ["node"]
CMD ["src/server.js"] # Default args for ENTRYPOINT
# 1. Base image (rarely changes)
FROM python:3.11-slim
# 2. System dependencies (rarely change)
RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*
# 3. Application dependencies (change occasionally)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# 4. Application code (changes frequently)
COPY . .
# 5. Runtime configuration
ENV PYTHONUNBUFFERED=1
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Exclude files from build context (faster builds, smaller images).
# .dockerignore
node_modules/
npm-debug.log
.git/
.gitignore
*.md
.env
.vscode/
__pycache__/
*.pyc
.pytest_cache/
coverage/
dist/
build/
Optimize image size by separating build and runtime stages.
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]
Benefits :
# Build stage
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# Runtime stage
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "app.py"]
# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o server
# Runtime stage (scratch = empty base image)
FROM scratch
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
Result: ~10MB final image containing only the compiled binary.
Define multi-container applications in YAML.
version: '3.8'
services:
app:
build: .
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgres://db:5432/myapp
depends_on:
- db
volumes:
- ./src:/app/src # Hot reload in development
db:
image: postgres:15-alpine
environment:
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- db_data:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
db_data:
# Start all services
docker-compose up
# Start in background
docker-compose up -d
# Rebuild images
docker-compose up --build
# Stop services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# View logs
docker-compose logs -f app
# Run one-off command
docker-compose run app npm test
version: '3.8'
services:
# Frontend
web:
build:
context: ./frontend
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
volumes:
- ./frontend/src:/app/src
environment:
- REACT_APP_API_URL=http://localhost:8000
# Backend API
api:
build: ./backend
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
- REDIS_URL=redis://redis:6379
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
volumes:
- ./backend:/app
command: uvicorn main:app --host 0.0.0.0 --reload
# Database
db:
image: postgres:15-alpine
environment:
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
# Cache
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
# Worker (background jobs)
worker:
build: ./backend
command: celery -A tasks worker --loglevel=info
environment:
- REDIS_URL=redis://redis:6379
depends_on:
- redis
- db
volumes:
db_data:
redis_data:
networks:
default:
name: myapp_network
services:
app:
build: .
volumes:
- ./src:/app/src # Sync source code
- /app/node_modules # Prevent overwriting container's node_modules
command: npm run dev
# Dockerfile.dev
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install # Include dev dependencies
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
services:
web:
build: .
volumes:
- .:/app
command: python manage.py runserver 0.0.0.0:8000
# or for FastAPI:
# command: uvicorn main:app --host 0.0.0.0 --reload
.devcontainer/devcontainer.json:
{
"name": "Python Dev Container",
"dockerComposeFile": "../docker-compose.yml",
"service": "app",
"workspaceFolder": "/app",
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance"
],
"settings": {
"python.defaultInterpreterPath": "/usr/local/bin/python"
}
}
},
"postCreateCommand": "pip install -r requirements-dev.txt",
"remoteUser": "vscode"
}
# PostgreSQL
docker run -d \
--name dev-postgres \
-e POSTGRES_PASSWORD=localdev \
-e POSTGRES_DB=myapp_dev \
-p 5432:5432 \
-v pgdata:/var/lib/postgresql/data \
postgres:15-alpine
# MySQL
docker run -d \
--name dev-mysql \
-e MYSQL_ROOT_PASSWORD=localdev \
-e MYSQL_DATABASE=myapp_dev \
-p 3306:3306 \
-v mysqldata:/var/lib/mysql \
mysql:8
# MongoDB
docker run -d \
--name dev-mongo \
-p 27017:27017 \
-v mongodata:/data/db \
mongo:7
# Redis
docker run -d \
--name dev-redis \
-p 6379:6379 \
redis:7-alpine
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# Health check endpoint
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD node healthcheck.js
EXPOSE 3000
CMD ["node", "server.js"]
// healthcheck.js
const http = require('http');
const options = {
host: 'localhost',
port: 3000,
path: '/health',
timeout: 2000
};
const request = http.request(options, (res) => {
if (res.statusCode === 200) {
process.exit(0);
} else {
process.exit(1);
}
});
request.on('error', () => process.exit(1));
request.end();
FROM python:3.11-slim
# 1. Use non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser
# 2. Install dependencies as root
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# 3. Copy application files
COPY --chown=appuser:appuser . .
# 4. Switch to non-root user
USER appuser
# 5. Drop unnecessary privileges
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
Additional Security Measures :
docker scan myapp:latest# Docker Swarm secrets (production)
echo "db_password_here" | docker secret create db_password -
version: '3.8'
services:
app:
image: myapp
secrets:
- db_password
environment:
- DB_PASSWORD_FILE=/run/secrets/db_password
secrets:
db_password:
external: true
Alternative: Environment Files
# docker-compose.yml
services:
app:
env_file:
- .env.production # Never commit this file
# .env.production (gitignored)
DATABASE_URL=postgresql://user:pass@db:5432/prod
SECRET_KEY=your-secret-key
services:
app:
image: myapp
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
# Command-line resource limits
docker run -d \
--memory="512m" \
--cpus="1.0" \
--restart=unless-stopped \
myapp
FROM python:3.11-slim
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
# Collect static files
RUN python manage.py collectstatic --noinput
# Create non-root user
RUN useradd -m -u 1000 django && chown -R django:django /app
USER django
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "myproject.wsgi:application"]
docker-compose.yml :
version: '3.8'
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/app
ports:
- "8000:8000"
environment:
- DEBUG=1
- DATABASE_URL=postgres://postgres:postgres@db:5432/django_dev
depends_on:
- db
db:
image: postgres:15-alpine
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: django_dev
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]
# Multi-stage build for Next.js
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
next.config.js (required for standalone output):
module.exports = {
output: 'standalone',
}
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --only=production
# Copy application
COPY . .
# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 3000
CMD ["node", "server.js"]
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json tsconfig.json ./
RUN npm ci
COPY src/ ./src/
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Initialize swarm
docker swarm init
# Deploy stack
docker stack deploy -c docker-compose.yml myapp
# Scale service
docker service scale myapp_web=5
# Update service (zero-downtime)
docker service update --image myapp:2.0 myapp_web
# Remove stack
docker stack rm myapp
| Feature | Docker Compose | Docker Swarm | Kubernetes |
|---|---|---|---|
| Complexity | Low | Medium | High |
| Use Case | Local dev | Small clusters | Production at scale |
| Setup | Single file | Built-in | Separate installation |
| Scaling | Manual | Automatic | Automatic + Advanced |
| HA | No | Yes | Yes |
| Ecosystem | Limited | Docker | Massive |
When to use each :
# .github/workflows/docker.yml
name: Build and Push Docker Image
on:
push:
branches: [main]
pull_request:
branches: [main]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v3
- name: Log in to Container Registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=sha
- name: Build and push
uses: docker/build-push-action@v4
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Run tests
run: |
docker run --rm ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} npm test
# .gitlab-ci.yml
stages:
- build
- test
- deploy
variables:
DOCKER_DRIVER: overlay2
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
build:
stage: build
image: docker:latest
services:
- docker:dind
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build -t $IMAGE_TAG .
- docker push $IMAGE_TAG
test:
stage: test
script:
- docker run --rm $IMAGE_TAG npm test
deploy:
stage: deploy
script:
- docker pull $IMAGE_TAG
- docker tag $IMAGE_TAG $CI_REGISTRY_IMAGE:latest
- docker push $CI_REGISTRY_IMAGE:latest
only:
- main
# Stream logs
docker logs -f container_name
# Last 100 lines
docker logs --tail 100 container_name
# Logs since timestamp
docker logs --since 2024-01-01T10:00:00 container_name
# With timestamps
docker logs -t container_name
# Docker Compose logs
docker-compose logs -f service_name
# Interactive shell
docker exec -it container_name /bin/sh
# or
docker exec -it container_name /bin/bash
# Run single command
docker exec container_name ls -la /app
# Run as different user
docker exec -u root container_name apt-get update
# Full container details
docker inspect container_name
# Specific field (IP address)
docker inspect -f '{{.NetworkSettings.IPAddress}}' container_name
# Environment variables
docker inspect -f '{{.Config.Env}}' container_name
# Mounted volumes
docker inspect -f '{{.Mounts}}' container_name
# Real-time stats
docker stats
# Single container
docker stats container_name
# No streaming (single snapshot)
docker stats --no-stream
# List networks
docker network ls
# Inspect network
docker network inspect bridge
# Test connectivity between containers
docker exec container1 ping container2
# Check DNS resolution
docker exec container_name nslookup other_container
# Build with no cache
docker build --no-cache -t myapp .
# Show build progress
docker build --progress=plain -t myapp .
# Build specific stage
docker build --target builder -t myapp-builder .
# Inspect intermediate layers
docker history myapp:latest
An example MCP server setup from this skill (summarized):
- docker-compose.yml runs mcp-server on a port range of 8875-8895 with an optional Chrome profile, mounts source read-only (./src:/app/src:ro), and keeps persistent logs and temp volumes.
- Environment: MCP_DEBUG=true, MCP_LOG_LEVEL=DEBUG, MCP_HOST=0.0.0.0, MCP_PORT=8875.
- Optional compose profiles add chrome (browser) and tools (dev tools container).
- The development Dockerfile takes ARG PYTHON_VERSION=3.11, installs watchdog and Playwright Chromium, runs python -m src.dev_runner, and exposes a /health endpoint.
- The production Dockerfile is a multi-stage build on python:3.11-slim with a /opt/venv virtualenv; it installs curl for the healthcheck, sets PYTHONPATH=/app, and uses CMD ["python", "run_api_server.py"] with a /health check.
# Find process using port
lsof -i :3000
# or
netstat -tulpn | grep 3000
# Kill process
kill -9 <PID>
# Or use different host port
docker run -p 3001:3000 myapp
# Check Docker is running
docker info
# Restart Docker Desktop (Mac/Windows)
# or
sudo systemctl restart docker # Linux
# Check permissions (Linux)
sudo usermod -aG docker $USER
# Log out and back in
# Remove unused containers, images, volumes
docker system prune -a --volumes
# Remove only dangling images
docker image prune
# Remove stopped containers
docker container prune
# Remove unused volumes
docker volume prune
# Check disk usage
docker system df
# Create .dockerignore
cat > .dockerignore << EOF
node_modules/
.git/
*.log
dist/
coverage/
EOF
# Build with specific context
docker build -f Dockerfile -t myapp ./src
# Check logs
docker logs container_name
# Run with interactive shell to debug
docker run -it myapp /bin/sh
# Override entrypoint
docker run -it --entrypoint /bin/sh myapp
# Check exit code
docker inspect -f '{{.State.ExitCode}}' container_name
# Run as root to debug
docker exec -u root -it container_name /bin/sh
# Fix ownership
docker exec -u root container_name chown -R appuser:appuser /app
# Or rebuild with correct permissions in Dockerfile
# Use BuildKit (faster, better caching)
DOCKER_BUILDKIT=1 docker build -t myapp .
# Multi-stage builds to reduce layers
# Order instructions by change frequency
# Use .dockerignore to exclude unnecessary files
# Set memory limits
docker run -m 512m myapp
# Monitor memory
docker stats container_name
# Check for memory leaks in application
# Use delegated consistency (Mac)
volumes:
  - ./src:/app/src:delegated
# Or use named volumes instead of bind mounts
volumes:
  - node_modules:/app/node_modules
# GOOD
RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    && rm -rf /var/lib/apt/lists/*
# BAD (creates 3 layers, apt cache remains in layer 2)
RUN apt-get update
RUN apt-get install -y package1 package2
RUN rm -rf /var/lib/apt/lists/*
# Before: 800MB
FROM node:18
COPY . .
RUN npm install
CMD ["node", "server.js"]
# After: 120MB
FROM node:18-alpine
COPY package*.json ./
RUN npm ci --only=production
COPY server.js .
CMD ["node", "server.js"]
Development:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install # Include dev dependencies
COPY . .
CMD ["npm", "run", "dev"]
Production:
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
# Log to stdout/stderr (Docker captures these)
CMD ["node", "server.js"] # Good
# Don't redirect logs to files (lost when the container stops; note that
# in exec form a ">" is passed as a literal argument, not a shell redirect)
CMD node server.js > app.log # Bad
// Application logging
console.log('Info message'); // stdout
console.error('Error message'); // stderr
// Use structured logging
console.log(JSON.stringify({
level: 'info',
timestamp: new Date().toISOString(),
message: 'Request processed',
requestId: '123'
}));
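The structured-logging pattern above can be wrapped in a tiny helper. This is a sketch; the `log` function name and field layout are illustrative, not part of any library:

```javascript
// Minimal structured logger: one JSON object per line, so Docker's
// logging drivers (json-file, fluentd, etc.) can parse each entry.
function log(level, message, fields = {}) {
  const entry = {
    level,
    timestamp: new Date().toISOString(),
    message,
    ...fields,
  };
  const line = JSON.stringify(entry);
  // Errors go to stderr so `docker logs` keeps the streams separate
  if (level === 'error') {
    console.error(line);
  } else {
    console.log(line);
  }
  return entry;
}

log('info', 'Request processed', { requestId: '123' });
log('error', 'Upstream timeout', { requestId: '124' });
```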
# Use ARG for build-time variables
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
# Use ENV for runtime variables
ENV PORT=3000
ENV LOG_LEVEL=info
# Override at runtime
# docker run -e PORT=8080 -e LOG_LEVEL=debug myapp
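On the application side, those ENV values are read from `process.env`. A minimal sketch (the `loadConfig` helper is illustrative), falling back to the same defaults the image bakes in:

```javascript
// Resolve runtime configuration from the environment, mirroring the
// ENV defaults in the Dockerfile above (PORT=3000, LOG_LEVEL=info).
function loadConfig(env = process.env) {
  return {
    port: parseInt(env.PORT || '3000', 10),
    logLevel: env.LOG_LEVEL || 'info',
  };
}

// docker run -e PORT=8080 myapp  → loadConfig().port === 8080
```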
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
    CMD curl -f http://localhost:3000/health || exit 1
# docker-compose.yml
services:
  app:
    image: myapp
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 40s
// Node.js example
process.on('SIGTERM', () => {
console.log('SIGTERM received, closing server...');
server.close(() => {
console.log('Server closed');
process.exit(0);
});
});
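Docker sends SIGKILL about 10 seconds after SIGTERM by default (`docker stop -t` changes this), so it is worth pairing `server.close()` with a hard deadline. A sketch; the `shutdown` helper and `GRACE_MS` value are illustrative:

```javascript
// Close the server gracefully, but force-exit if in-flight requests
// don't drain before Docker's SIGKILL would arrive anyway.
const GRACE_MS = 5000;

function shutdown(server, exit = process.exit) {
  const timer = setTimeout(() => exit(1), GRACE_MS);
  timer.unref(); // don't let the timer alone keep the process alive
  server.close(() => {
    clearTimeout(timer);
    exit(0); // clean exit once all connections have finished
  });
}

// process.on('SIGTERM', () => shutdown(server));
```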
# Use exec form to properly handle signals
CMD ["node", "server.js"] # Good
CMD node server.js # Bad (wrapped in /bin/sh, signals not forwarded)
# Images
docker build -t name:tag .
docker pull image:tag
docker push image:tag
docker images
docker rmi image:tag
# Containers
docker run -d --name container image
docker ps # Running containers
docker ps -a # All containers
docker stop container
docker start container
docker restart container
docker rm container
docker logs -f container
docker exec -it container /bin/sh
# Cleanup
docker system prune -a # Remove all unused resources
docker container prune # Remove stopped containers
docker image prune # Remove dangling images
docker volume prune # Remove unused volumes
# Compose
docker-compose up -d
docker-compose down
docker-compose logs -f
docker-compose exec service /bin/sh
docker-compose build
# docker run flags
-d # Detached (background)
-it # Interactive with TTY
-p 8080:80 # Port mapping (host:container)
--name myapp # Container name
-e VAR=value # Environment variable
-v /host:/container # Volume mount
--network name # Connect to network
--rm # Remove container on exit
-m 512m # Memory limit
--cpus 1.0 # CPU limit
FROM image:tag # Base image
WORKDIR /path # Set working directory
COPY src dst # Copy files
ADD src dst # Copy (with URL/tar support)
RUN command # Execute command
ENV KEY=value # Environment variable
EXPOSE port # Document port
CMD ["executable"] # Default command
ENTRYPOINT ["exec"] # Command prefix
VOLUME /path # Create mount point
USER username # Set user
ARG name=default # Build argument
LABEL key=value # Metadata
HEALTHCHECK CMD command # Health check
Docker containerization provides consistent, portable environments across development, testing, and production, eliminating "works on my machine" issues.
Key Workflows: build images (docker build), run and debug containers (docker run, docker exec, docker logs), orchestrate multi-container stacks (docker-compose), and clean up resources (docker system prune).
Next Steps: apply multi-stage builds, health checks, and graceful shutdown to your own images, and wire image builds into CI/CD as shown above.