Chapter 6: Docker Production Deployment

Authored by syscook.dev

What is Docker Production Deployment?

Docker production deployment involves running containerized applications in production environments with high availability, scalability, security, and monitoring. It includes orchestration, CI/CD pipelines, monitoring, logging, and disaster recovery strategies.

Key Concepts:

  • Orchestration: Managing multiple containers across multiple hosts
  • CI/CD: Continuous integration and deployment pipelines
  • Monitoring: Observability and performance tracking
  • Logging: Centralized log collection and analysis
  • Security: Production-grade security measures
  • Scaling: Horizontal and vertical scaling strategies
  • High Availability: Ensuring service uptime and reliability

Why Use Docker for Production?

1. Consistent Deployments

Docker ensures identical environments across all stages of deployment.

# Same image runs identically everywhere
docker run -d --name web \
  -e NODE_ENV=production \
  -p 80:3000 \
  myapp:latest

Benefits:

  • Eliminates environment-specific issues
  • Reduces deployment failures
  • Simplifies rollback procedures

2. Scalability and Resource Efficiency

Docker enables efficient resource utilization and easy scaling.

# docker-compose.prod.yml
version: '3.8'
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 5
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
    restart: unless-stopped

3. High Availability and Fault Tolerance

Docker provides built-in mechanisms for service resilience.

version: '3.8'
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
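The healthcheck semantics above are worth spelling out: a container is only marked unhealthy after `retries` consecutive probe failures, and any success resets the counter. A small Python sketch of that decision logic (the function name and the pre-recorded probe outcomes are illustrative, standing in for the real `curl` probe Docker would run):

```python
def health_status(results, retries=3):
    """Return the status reported after each probe outcome.

    `results` is a sequence of booleans (True = probe passed),
    mirroring Docker's rule: only `retries` consecutive failures
    flip the container to unhealthy; one success resets the count.
    """
    statuses = []
    failures = 0
    for ok in results:
        if ok:
            failures = 0  # a passing probe resets the failure streak
            statuses.append("healthy")
        else:
            failures += 1
            statuses.append("unhealthy" if failures >= retries else "healthy")
    return statuses
```

With `retries: 3`, two isolated failures never mark the service unhealthy; only the third consecutive failure does.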

Production Architecture Patterns

1. Microservices Architecture

Service Mesh with Istio

# docker-compose.istio.yml
version: '3.8'
services:
  istio-proxy:
    image: istio/proxyv2:latest
    ports:
      - "15000:15000"
      - "15001:15001"
      - "15020:15020"
    environment:
      - ISTIO_META_INTERCEPTION_MODE=REDIRECT
      - ISTIO_META_PROXY_PORT=15001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  web:
    image: myapp:latest
    depends_on:
      - istio-proxy
    environment:
      - ISTIO_META_INTERCEPTION_MODE=REDIRECT

API Gateway Pattern

version: '3.8'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - api-gateway
    restart: unless-stopped

  api-gateway:
    image: myapp/api-gateway:latest
    environment:
      - USER_SERVICE_URL=http://user-service:3001
      - ORDER_SERVICE_URL=http://order-service:3002
      - PAYMENT_SERVICE_URL=http://payment-service:3003
    depends_on:
      - user-service
      - order-service
      - payment-service
    restart: unless-stopped

  user-service:
    image: myapp/user-service:latest
    environment:
      - DATABASE_URL=postgresql://user:password@user-db:5432/users
    depends_on:
      - user-db
    restart: unless-stopped

  order-service:
    image: myapp/order-service:latest
    environment:
      - DATABASE_URL=postgresql://user:password@order-db:5432/orders
    depends_on:
      - order-db
    restart: unless-stopped

  payment-service:
    image: myapp/payment-service:latest
    environment:
      - DATABASE_URL=postgresql://user:password@payment-db:5432/payments
    depends_on:
      - payment-db
    restart: unless-stopped
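The compose file above mounts an nginx.conf that is not shown in this chapter. A minimal sketch of what it might contain — the upstream port 8080 is an assumption, since the gateway's listen port is not specified in the compose file:

```
# nginx.conf (minimal sketch; the api-gateway port is assumed to be 8080)
events {}
http {
  upstream gateway {
    server api-gateway:8080;
  }
  server {
    listen 80;
    location / {
      proxy_pass http://gateway;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```

The service name `api-gateway` resolves through Docker's embedded DNS on the shared compose network, so no IP addresses are needed.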

2. Database Clustering and High Availability

PostgreSQL Cluster with Replication

version: '3.8'
services:
  postgres-master:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=replicator_password
    volumes:
      - postgres_master_data:/var/lib/postgresql/data
      - ./postgres-master.conf:/etc/postgresql/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    restart: unless-stopped

  postgres-slave1:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=replicator_password
    volumes:
      - postgres_slave1_data:/var/lib/postgresql/data
      - ./postgres-slave.conf:/etc/postgresql/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    depends_on:
      - postgres-master
    restart: unless-stopped

  postgres-slave2:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=replicator_password
    volumes:
      - postgres_slave2_data:/var/lib/postgresql/data
      - ./postgres-slave.conf:/etc/postgresql/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    depends_on:
      - postgres-master
    restart: unless-stopped

  pgpool:
    image: pgpool/pgpool:latest
    environment:
      - PGPOOL_BACKEND_HOSTNAME0=postgres-master
      - PGPOOL_BACKEND_PORT0=5432
      - PGPOOL_BACKEND_HOSTNAME1=postgres-slave1
      - PGPOOL_BACKEND_PORT1=5432
      - PGPOOL_BACKEND_HOSTNAME2=postgres-slave2
      - PGPOOL_BACKEND_PORT2=5432
    ports:
      - "5432:5432"
    depends_on:
      - postgres-master
      - postgres-slave1
      - postgres-slave2
    restart: unless-stopped

volumes:
  postgres_master_data:
  postgres_slave1_data:
  postgres_slave2_data:

Note: the official postgres image does not act on the POSTGRES_REPLICATION_* variables; with postgres:13, streaming replication must actually be configured in the mounted postgresql.conf and pg_hba.conf (and the replicas initialized with pg_basebackup), or you can use an image that automates this.

3. Load Balancing and High Availability

HAProxy Load Balancer

version: '3.8'
services:
  haproxy:
    image: haproxy:alpine
    ports:
      - "80:80"
      - "443:443"
      - "8404:8404" # stats page
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./ssl:/etc/ssl:ro
    depends_on:
      - web1
      - web2
      - web3
    restart: unless-stopped

  web1:
    image: myapp:latest
    environment:
      - NODE_ENV=production
    restart: unless-stopped

  web2:
    image: myapp:latest
    environment:
      - NODE_ENV=production
    restart: unless-stopped

  web3:
    image: myapp:latest
    environment:
      - NODE_ENV=production
    restart: unless-stopped
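The haproxy.cfg mounted above is not shown in this chapter. A minimal sketch of what it might look like — the backend port 3000 and the /health probe path are assumptions carried over from the earlier examples in this chapter:

```
# haproxy.cfg (minimal sketch; app port 3000 and /health are assumed)
defaults
  mode http
  timeout connect 5s
  timeout client  30s
  timeout server  30s

frontend http_in
  bind *:80
  default_backend web_servers

backend web_servers
  balance roundrobin
  option httpchk GET /health
  server web1 web1:3000 check
  server web2 web2:3000 check
  server web3 web3:3000 check

listen stats
  bind *:8404
  stats enable
  stats uri /
```

The `check` keyword makes HAProxy probe each backend and stop routing to instances that fail the health check, which is what gives this setup its fault tolerance.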

CI/CD Pipeline Integration

1. GitHub Actions Pipeline

Complete CI/CD Pipeline

# .github/workflows/docker-deploy.yml
name: Docker Production Deployment

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build and test
        run: |
          docker build -t ${{ env.IMAGE_NAME }}:test .
          docker run --rm ${{ env.IMAGE_NAME }}:test npm test

      - name: Security scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.IMAGE_NAME }}:test
          format: 'sarif'
          output: 'trivy-results.sarif'

  build:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3

      - name: Log in to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix={{branch}}-
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3

      - name: Deploy to production
        run: |
          docker-compose -f docker-compose.prod.yml pull
          docker-compose -f docker-compose.prod.yml up -d

      - name: Health check
        run: |
          # wait for the deployment to become ready
          sleep 30
          curl -f http://localhost/health || exit 1

2. GitLab CI/CD Pipeline

GitLab CI Configuration

# .gitlab-ci.yml
stages:
  - test
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:dind

before_script:
  - docker info

test:
  stage: test
  script:
    - docker build -t $CI_REGISTRY_IMAGE:test .
    - docker run --rm $CI_REGISTRY_IMAGE:test npm test
    - docker run --rm $CI_REGISTRY_IMAGE:test npm run lint

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest
  only:
    - main

deploy:
  stage: deploy
  script:
    - docker-compose -f docker-compose.prod.yml pull
    - docker-compose -f docker-compose.prod.yml up -d
  only:
    - main
  when: manual

3. Jenkins Pipeline

Jenkinsfile

pipeline {
    agent any

    environment {
        DOCKER_REGISTRY = 'your-registry.com'
        IMAGE_NAME = 'myapp'
    }

    stages {
        stage('Test') {
            steps {
                sh 'docker build -t ${IMAGE_NAME}:test .'
                sh 'docker run --rm ${IMAGE_NAME}:test npm test'
                sh 'docker run --rm ${IMAGE_NAME}:test npm run lint'
            }
        }

        stage('Build') {
            steps {
                sh 'docker build -t ${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER} .'
                sh 'docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER}'
                sh 'docker tag ${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER} ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest'
                sh 'docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest'
            }
        }

        stage('Deploy') {
            steps {
                sh 'docker-compose -f docker-compose.prod.yml pull'
                sh 'docker-compose -f docker-compose.prod.yml up -d'
            }
        }
    }

    post {
        always {
            sh 'docker system prune -f'
        }
    }
}

Monitoring and Observability

1. Prometheus and Grafana Stack

Monitoring Stack

version: '3.8'
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    restart: unless-stopped

  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    restart: unless-stopped

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    restart: unless-stopped

volumes:
  prometheus_data:
  grafana_data:
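The stack mounts a prometheus.yml that is not shown in this chapter. A minimal sketch of what it might contain, scraping the exporters defined above (target names match the compose service names; the scrape interval is an arbitrary choice):

```yaml
# prometheus.yml (minimal sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
```

Because Prometheus runs on the same compose network, the service names resolve directly, so no host IPs are needed.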

2. ELK Stack for Logging

Elasticsearch, Logstash, and Kibana

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    restart: unless-stopped

  logstash:
    image: docker.elastic.co/logstash/logstash:7.15.0
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
      - ./logstash/config:/usr/share/logstash/config:ro
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    restart: unless-stopped

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.15.0
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - logstash
    restart: unless-stopped

volumes:
  elasticsearch_data:

3. Application Performance Monitoring

APM with Jaeger

version: '3.8'
services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686" # web UI
      - "14268:14268" # collector HTTP endpoint
    environment:
      - COLLECTOR_OTLP_ENABLED=true
    restart: unless-stopped

  zipkin:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
    restart: unless-stopped

  app:
    image: myapp:latest
    environment:
      - JAEGER_AGENT_HOST=jaeger
      - JAEGER_AGENT_PORT=6831 # agent UDP port; 14268 is the collector's HTTP endpoint, not the agent
      - JAEGER_SERVICE_NAME=myapp
    depends_on:
      - jaeger
    restart: unless-stopped

Security Best Practices

1. Container Security

Security Scanning and Hardening

# Security-hardened Dockerfile
FROM node:18-alpine AS base
RUN apk add --no-cache dumb-init
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

FROM base AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

FROM base AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM base AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY --from=deps --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
USER nextjs
EXPOSE 3000
CMD ["dumb-init", "node", "dist/app.js"]

Security Scanning in CI/CD

# Security scanning step
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: ${{ env.IMAGE_NAME }}:test
    format: 'sarif'
    output: 'trivy-results.sarif'

- name: Upload Trivy scan results
  uses: github/codeql-action/upload-sarif@v2
  with:
    sarif_file: 'trivy-results.sarif'

2. Network Security

Secure Network Configuration

version: '3.8'
services:
  web:
    image: myapp:latest
    networks:
      - frontend
    restart: unless-stopped

  api:
    image: myapp-api:latest
    networks:
      - frontend
      - backend
    restart: unless-stopped

  db:
    image: postgres:13
    networks:
      - backend
    restart: unless-stopped

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

3. Secrets Management

Docker Secrets

version: '3.8'
services:
  web:
    image: myapp:latest
    secrets:
      - db_password
      - api_key
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password
      - API_KEY_FILE=/run/secrets/api_key
    restart: unless-stopped

  db:
    image: postgres:13
    secrets:
      - db_password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    restart: unless-stopped

secrets:
  db_password:
    external: true
  api_key:
    external: true

Note: external secrets are created with docker secret create and require Swarm mode; with plain Compose, define each secret from a local file instead (for example, file: ./db_password.txt).

Disaster Recovery and Backup

1. Data Backup Strategy

Automated Backup System

version: '3.8'
services:
  backup:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./backups:/backups
    environment:
      - PGPASSWORD=password
    command: |
      sh -c '
      while true; do
        pg_dump -h db -U user myapp > /backups/backup-$$(date +%Y%m%d-%H%M%S).sql
        find /backups -name "backup-*.sql" -mtime +7 -delete
        sleep 86400
      done
      '
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  postgres_data:

The doubled $$ is required so that Compose passes a literal $ through to the shell instead of attempting variable interpolation on $(date …).

2. High Availability Setup

Multi-Node Deployment

version: '3.8'
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3

  db:
    image: postgres:13
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3

In Swarm mode, deploy.restart_policy governs restarts and the Compose-level restart option is ignored, so it is omitted here.

Performance Optimization

1. Resource Optimization

Resource Limits and Requests

version: '3.8'
services:
  web:
    image: myapp:latest
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
    restart: unless-stopped

2. Caching Strategies

Redis Caching

version: '3.8'
services:
  redis:
    image: redis:alpine
    volumes:
      - redis_data:/data
    restart: unless-stopped

  web:
    image: myapp:latest
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    restart: unless-stopped

volumes:
  redis_data:
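On the application side, Redis is most often used with the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache with a TTL. A minimal sketch — a plain dict stands in for the Redis client here, and the class name is illustrative (real code would use a client such as redis-py pointed at REDIS_URL):

```python
import time

class CacheAside:
    """Cache-aside: read through the cache, load on a miss, expire by TTL.
    A dict stands in for Redis; swap in a real client for production."""

    def __init__(self, loader, ttl=60):
        self.loader = loader   # e.g. a database query function
        self.ttl = ttl         # seconds before a cached value expires
        self.store = {}        # stand-in for the Redis server
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None and entry[1] > time.time():
            self.hits += 1
            return entry[0]    # fresh cached value
        self.misses += 1
        value = self.loader(key)                     # fall back to the source
        self.store[key] = (value, time.time() + self.ttl)  # repopulate
        return value
```

The TTL bounds how stale a cached value can get, which is the usual trade-off between load on the database and freshness of responses.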

Summary

Docker production deployment requires careful planning and implementation:

  • Architecture patterns like microservices and API gateways
  • CI/CD pipelines for automated testing and deployment
  • Monitoring and observability with Prometheus, Grafana, and ELK stack
  • Security measures including scanning, secrets management, and network isolation
  • Disaster recovery with backup strategies and high availability
  • Performance optimization through resource management and caching

By following these best practices, you can deploy Docker applications that are secure, scalable, and maintainable in production environments.


This completes the SysCook Docker tutorial series. You now have comprehensive knowledge of Docker from basics to production deployment.