Docker Complete Guide - Containerization for Modern Development
Docker has revolutionized how we develop, package, and deploy applications. This comprehensive guide covers everything you need to know about Docker containerization, from basic concepts to advanced deployment strategies.
What is Docker?
Docker is a containerization platform that allows you to package applications and their dependencies into lightweight, portable containers. Containers ensure that your application runs consistently across different environments.
Key Benefits of Docker
- Consistency: Same environment across development, testing, and production
- Portability: Run anywhere Docker is installed
- Isolation: Applications run in isolated environments
- Scalability: Easy to scale applications horizontally
- Resource Efficiency: Lightweight compared to virtual machines
Docker Fundamentals
Core Concepts
- Images: Read-only templates used to create containers
- Containers: Running instances of Docker images
- Dockerfile: Text file with instructions to build images
- Registry: Storage and distribution system for Docker images
- Docker Compose: Tool for defining multi-container applications
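These concepts fit together in a typical workflow: write a Dockerfile, build it into an image, run containers from that image, and push the image to a registry. Sketched with a hypothetical myapp image:

```shell
# Build an image named myapp:1.0 from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Start a container from that image
docker run -d --name myapp-container myapp:1.0

# Push the image to a registry (assumes you are logged in via docker login)
docker push myapp:1.0
```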
Installation
macOS and Windows
Download Docker Desktop from docker.com
Linux (Ubuntu/Debian)
# Update package index
sudo apt-get update
# Install required packages
sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Add user to docker group (log out and back in for this to take effect)
sudo usermod -aG docker $USER
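Once installed, verify that the client can reach the daemon:

```shell
# Print client and server version information
docker version

# Run a throwaway test container
docker run hello-world
```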
Working with Docker Images
Basic Commands
# Pull an image from Docker Hub
docker pull nginx:latest
# List local images
docker images
# Remove an image
docker rmi nginx:latest
# Search for images
docker search python
Running Containers
# Run a container
docker run -d -p 8080:80 --name my-nginx nginx
# Run with environment variables (my-node-app stands in for your own application image;
# a bare node:16 image started detached would exit immediately)
docker run -d -p 3000:3000 -e NODE_ENV=production --name my-app my-node-app
# Run interactively
docker run -it ubuntu:20.04 /bin/bash
# Run in background (detached)
docker run -d --name my-container nginx
# Stop a container
docker stop my-container
# Remove a container
docker rm my-container
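Two more commands you will reach for constantly when working with containers:

```shell
# List containers (-a includes stopped ones)
docker ps -a

# Open a shell inside a running container
docker exec -it my-container /bin/sh
```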
Creating Docker Images with Dockerfile
Basic Dockerfile
# Use official Python runtime as base image
FROM python:3.9-slim
# Set working directory
WORKDIR /app
# Copy requirements first (for better caching)
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Expose port
EXPOSE 8000
# Set environment variables
ENV PYTHONPATH=/app
ENV FLASK_APP=app.py
# Run the application
CMD ["python", "app.py"]
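For completeness, here is a hypothetical app.py that this Dockerfile could run. It uses only the standard library so it works in python:3.9-slim even with an empty requirements.txt; a real project would likely use Flask here, as the FLASK_APP variable suggests:

```python
# app.py -- minimal placeholder web app for the Dockerfile above
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 OK with a plain-text body to any GET request
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        # Silence per-request logging
        pass


if __name__ == "__main__":
    # Bind to 0.0.0.0 so the server is reachable through the published port
    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```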
Multi-stage Dockerfile
# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies (dev dependencies are needed for the build step)
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
# Copy built files from builder stage
COPY --from=builder /app/dist /usr/share/nginx/html
# Copy custom nginx config
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Best Practices for Dockerfiles
1. Use Specific Base Images
# Good - specific version
FROM node:16.14.2-alpine
# Bad - latest tag
FROM node:latest
2. Leverage Build Cache
# Copy package files first
COPY package*.json ./
RUN npm install
# Then copy source code
COPY . .
3. Use .dockerignore
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.nyc_output
coverage
.coverage
4. Minimize Layers
# Good - combine RUN commands
RUN apt-get update && \
apt-get install -y python3 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Bad - separate RUN commands
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get clean
5. Use Non-root User
# Create non-root user and add it to the nodejs group
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 -G nodejs
# Switch to non-root user
USER nextjs
Docker Compose for Multi-Container Applications
Basic docker-compose.yml
version: '3.8'
services:
web:
build: .
ports:
- "3000:3000"
environment:
- NODE_ENV=production
depends_on:
- db
- redis
db:
image: postgres:13
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5432:5432"
redis:
image: redis:6-alpine
ports:
- "6379:6379"
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
depends_on:
- web
volumes:
postgres_data:
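With this file in place, the whole stack is managed with a few commands:

```shell
# Build images and start all services in the background
docker compose up -d --build

# Tail logs from every service
docker compose logs -f

# Stop and remove containers and networks (add -v to remove volumes too)
docker compose down
```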
Advanced Docker Compose Configuration
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile
target: production
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://user:password@db:5432/myapp
- REDIS_URL=redis://redis:6379
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
restart: unless-stopped
networks:
- app-network
db:
image: postgres:13
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
interval: 10s
timeout: 5s
retries: 5
networks:
- app-network
redis:
image: redis:6-alpine
command: redis-server --appendonly yes
volumes:
- redis_data:/data
networks:
- app-network
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./ssl:/etc/nginx/ssl
depends_on:
- app
networks:
- app-network
volumes:
postgres_data:
redis_data:
networks:
app-network:
driver: bridge
Real-World Examples
1. Node.js Application
Dockerfile
FROM node:16-alpine
# Create app directory
WORKDIR /usr/src/app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy source code
COPY . .
# Create non-root user and add it to the nodejs group
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 -G nodejs
# Change ownership
RUN chown -R nextjs:nodejs /usr/src/app
USER nextjs
# Expose port
EXPOSE 3000
# Health check (wget is used because the alpine base image does not ship curl)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
# Start application
CMD ["npm", "start"]
docker-compose.yml
version: '3.8'
services:
app:
build: .
ports:
- "3000:3000"
environment:
- NODE_ENV=production
- DATABASE_URL=postgresql://user:password@db:5432/myapp
depends_on:
- db
restart: unless-stopped
db:
image: postgres:13
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
postgres_data:
2. Python FastAPI Application
Dockerfile
FROM python:3.9-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set work directory
WORKDIR /app
# Install system dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy project
COPY . .
# Create non-root user
RUN adduser --disabled-password --gecos '' appuser
RUN chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 8000
# Run the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
3. React Application with Nginx
Dockerfile
# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
# Copy built files
COPY --from=builder /app/build /usr/share/nginx/html
# Copy nginx configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
nginx.conf
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
# Handle client-side routing
location / {
try_files $uri $uri/ /index.html;
}
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
}
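Building and serving the app locally is then:

```shell
# Build the production image
docker build -t my-react-app .

# Serve the built app on http://localhost:8080
docker run -d -p 8080:80 --name react-app my-react-app
```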
Docker Networking
Custom Networks
# Create custom network
docker network create my-network
# Run containers on custom network
docker run -d --name web --network my-network nginx
docker run -d --name db --network my-network -e POSTGRES_PASSWORD=secret postgres
Network Types
- bridge: Default network for containers
- host: Use host's network directly
- overlay: For multi-host networking
- macvlan: Assign MAC address to container
Docker Volumes and Data Persistence
Volume Types
1. Named Volumes
# Create named volume
docker volume create my-volume
# Use named volume (mounted at Postgres's data directory so the database persists)
docker run -d -e POSTGRES_PASSWORD=secret -v my-volume:/var/lib/postgresql/data postgres
2. Bind Mounts
# Mount host directory
docker run -d -v /host/path:/container/path nginx
3. tmpfs Mounts
# In-memory storage
docker run -d --tmpfs /tmp nginx
Volume Management
# List volumes
docker volume ls
# Inspect volume
docker volume inspect my-volume
# Remove volume
docker volume rm my-volume
# Remove unused volumes
docker volume prune
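A common pattern for backing up a named volume is to mount it read-only into a throwaway container alongside a host directory:

```shell
# Archive the contents of my-volume into ./backup/data.tar.gz
docker run --rm \
  -v my-volume:/data:ro \
  -v "$(pwd)/backup":/backup \
  alpine tar czf /backup/data.tar.gz -C /data .
```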
Docker Security Best Practices
1. Use Official Images
# Good
FROM node:16-alpine
# Bad
FROM some-random-user/node:latest
2. Scan Images for Vulnerabilities
# Scan an image (the older docker scan command was retired in favor of Docker Scout)
docker scout cves nginx:latest
# Or use a standalone scanner such as Trivy
trivy image nginx:latest
3. Run as Non-root User
# Create and use non-root user
RUN adduser -D -s /bin/sh appuser
USER appuser
4. Use Multi-stage Builds
# Build stage
FROM node:16 AS builder
# ... build steps
# Production stage
FROM node:16-alpine
COPY --from=builder /app/dist /app
5. Limit Container Resources
# docker-compose.yml
services:
app:
image: nginx
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
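Note that deploy.resources is honored by Docker Swarm and recent versions of Docker Compose; for a single container, the equivalent docker run flags are:

```shell
# Cap the container at half a CPU and 512 MB of memory
docker run -d --cpus="0.5" --memory="512m" nginx
```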
Monitoring and Logging
Container Logs
# View logs
docker logs container-name
# Follow logs
docker logs -f container-name
# View logs with timestamps
docker logs -t container-name
Health Checks
# Dockerfile health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
# docker-compose.yml health check
services:
app:
image: nginx
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost"]
interval: 30s
timeout: 10s
retries: 3
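Once a health check is defined, the container's current status can be queried with docker inspect:

```shell
# Prints healthy, unhealthy, or starting
docker inspect --format '{{.State.Health.Status}}' container-name
```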
Deployment Strategies
1. Docker Swarm
# Initialize swarm
docker swarm init
# Deploy stack
docker stack deploy -c docker-compose.yml myapp
# Scale service
docker service scale myapp_web=3
2. Kubernetes
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
replicas: 3
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp:latest
ports:
- containerPort: 3000
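The manifest is applied and scaled with kubectl:

```shell
# Create or update the deployment
kubectl apply -f deployment.yaml

# Watch the pods come up
kubectl get pods -l app=myapp

# Scale to five replicas
kubectl scale deployment myapp --replicas=5
```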
3. Cloud Platforms
- AWS ECS: Container service
- Google Cloud Run: Serverless containers
- Azure Container Instances: Simple container deployment
Troubleshooting Common Issues
1. Container Won't Start
# Check container logs
docker logs container-name
# Run the container interactively to debug (use /bin/sh if the image has no bash)
docker run -it image-name /bin/bash
2. Permission Issues
# Fix file permissions
RUN chown -R appuser:appuser /app
USER appuser
3. Network Connectivity
# Check network
docker network ls
docker network inspect network-name
# Test connectivity
docker exec container-name ping other-container
4. Volume Mount Issues
# Check volume mounts
docker inspect container-name
# Verify file permissions
docker exec container-name ls -la /path
Conclusion
Docker has become an essential tool for modern software development and deployment. By mastering Docker concepts, Dockerfile best practices, and Docker Compose, you can:
- Create consistent development environments
- Simplify application deployment
- Improve scalability and reliability
- Reduce environment-related issues
Next Steps
Ready to dive deeper into containerization and DevOps? Check out our comprehensive tutorials.
What's your experience with Docker? Share your tips and challenges in the comments below!