Chapter 2: Docker Images & Containers
Authored by syscook.dev
What are Docker Images and Containers?
Docker images are read-only templates used to create containers. They contain everything needed to run an application: code, runtime, libraries, environment variables, and configuration files. Docker containers are running instances of images that can be started, stopped, moved, and deleted.
Key Concepts:
- Image: Immutable template with application and dependencies
- Container: Running instance of an image
- Layer: Each instruction in a Dockerfile creates a new layer
- Registry: Centralized repository for storing and sharing images
- Tag: Version identifier for images (e.g., nginx:1.20)
- Repository: Collection of related images with different tags
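To connect these terms, here is a minimal command-line sketch using the public nginx image: the image is the template, the container is a running instance of it, and a tag is simply another name pointing at the same image.
# Pull an image (the template) from a registry
docker pull nginx:1.20
# Start a container (a running instance) from that image
docker run -d --name demo nginx:1.20
# Add another tag that points at the same image
docker tag nginx:1.20 my-nginx:stable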
Why Use Images and Containers?
1. Consistent Application Packaging
Images ensure your application runs identically across all environments.
# Same image works everywhere
docker run -p 8080:80 nginx:1.20
# Identical behavior on development, staging, and production
Benefits:
- Eliminates "works on my machine" issues
- Ensures reproducible deployments
- Simplifies environment management
2. Efficient Resource Utilization
Containers share the host OS kernel, making them lightweight and fast.
# Compare container vs VM resource usage
docker stats
# Shows minimal resource consumption
Resource Efficiency:
- Startup Time: Containers start in seconds
- Memory Usage: Shared kernel reduces memory footprint
- CPU Overhead: Minimal compared to VMs
3. Easy Scaling and Management
Containers can be easily replicated, updated, and managed.
# Scale application horizontally
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker run -d --name web3 nginx
# Update all instances
docker stop web1 web2 web3
docker rm web1 web2 web3
docker run -d --name web1 nginx:1.21
docker run -d --name web2 nginx:1.21
docker run -d --name web3 nginx:1.21
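Starting and replacing containers one by one quickly becomes tedious; in practice this kind of horizontal scaling is usually delegated to Docker Compose or an orchestrator. A minimal sketch, assuming a compose.yaml that defines a web service based on nginx (the file contents are shown as comments):
# compose.yaml (hypothetical):
#   services:
#     web:
#       image: nginx:1.21
#       ports:
#         - "80"   # container port only, so each replica gets a random host port
# Start three replicas of the web service with one command
docker compose up -d --scale web=3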
Docker Images Deep Dive
1. Image Layers and Caching
Docker images are built on a layered filesystem: each instruction in a Dockerfile adds a new layer on top of the previous ones (instructions such as EXPOSE and CMD add only metadata, so their layers are empty).
# Example Dockerfile showing layers
FROM ubuntu:20.04 # Layer 1: Base image
RUN apt-get update # Layer 2: Update package lists
RUN apt-get install -y nginx # Layer 3: Install nginx
COPY index.html /var/www/html/ # Layer 4: Copy files
EXPOSE 80 # Layer 5: Expose port
CMD ["nginx", "-g", "daemon off;"] # Layer 6: Start command
Layer Benefits:
- Caching: Unchanged layers are reused
- Efficiency: Only changed layers are rebuilt
- Sharing: Common layers shared between images
2. Image Management Commands
Pull and List Images
# Pull an image from registry
docker pull nginx:1.20
# List all images
docker images
# List images with specific format
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}\t{{.CreatedAt}}"
# Search for images
docker search nginx
# Inspect image details
docker inspect nginx:1.20
Image History and Layers
# Show image history
docker history nginx:1.20
# Show image layers
docker image inspect nginx:1.20 --format='{{.RootFS.Layers}}'
# Show image size breakdown
docker system df -v
3. Image Tagging and Versioning
Tag Management
# Tag an image
docker tag nginx:1.20 my-nginx:latest
docker tag nginx:1.20 my-nginx:v1.20
docker tag nginx:1.20 my-nginx:stable
# List tagged images
docker images my-nginx
# Remove specific tag
docker rmi my-nginx:latest
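Tags are only pointers: every my-nginx tag created above refers to the same image ID, and removing one tag leaves the shared layers in place as long as another tag still references them.
# Every my-nginx tag points at the same image ID
docker images --format "{{.Repository}}:{{.Tag}} {{.ID}}" my-nginx
# Removing a tag deletes only the pointer, not the underlying layers
docker rmi my-nginx:stable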
Multi-Architecture Images
# Create and use a buildx builder for multi-platform builds
docker buildx create --name multiarch --use
# Build for multiple platforms
docker buildx build --platform linux/amd64,linux/arm64 \
-t myapp:latest --push .
# Inspect manifest
docker manifest inspect myapp:latest
Docker Containers Deep Dive
1. Container Lifecycle Management
Create and Start Containers
# Create container without starting
docker create --name my-container nginx:1.20
# Start existing container
docker start my-container
# Create and start in one command
docker run -d --name my-container nginx:1.20
# Run container interactively
docker run -it --name interactive-container ubuntu:20.04 bash
Container States and Management
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop container
docker stop my-container
# Start stopped container
docker start my-container
# Restart container
docker restart my-container
# Pause container
docker pause my-container
# Unpause container
docker unpause my-container
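The state a container is currently in can be read back with docker inspect, which makes these transitions easy to follow (a sketch assuming my-container is running):
# Report the current state (created, running, paused, exited, ...)
docker inspect --format '{{.State.Status}}' my-container
# Pause the container and check again
docker pause my-container
docker inspect --format '{{.State.Status}}' my-container
docker unpause my-container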
2. Container Configuration
Port Mapping
# Map host port to container port
docker run -p 8080:80 nginx
# Map multiple ports
docker run -p 8080:80 -p 8443:443 nginx
# Publish all exposed ports to random host ports
docker run -P nginx
# Map specific interface
docker run -p 127.0.0.1:8080:80 nginx
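Published ports can be verified after the fact with docker port (a minimal sketch; port-demo is a throwaway name):
# Publish a port and confirm the mapping
docker run -d --name port-demo -p 127.0.0.1:8080:80 nginx
docker port port-demo
# Prints: 80/tcp -> 127.0.0.1:8080
curl -s http://127.0.0.1:8080 >/dev/null && echo "reachable"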
Environment Variables
# Set environment variables
docker run -e MYSQL_ROOT_PASSWORD=secret mysql
# Set multiple environment variables
# (the mysql image also expects MYSQL_PASSWORD whenever MYSQL_USER is set)
docker run -e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=myapp \
-e MYSQL_USER=user \
-e MYSQL_PASSWORD=userpass \
mysql
# Use environment file
docker run --env-file .env mysql
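The --env-file flag expects a plain KEY=value file. A sketch of such a file (values are placeholders) and one way to confirm the variables inside the container:
# .env contents (one KEY=value per line; lines starting with # are ignored):
#   MYSQL_ROOT_PASSWORD=secret
#   MYSQL_DATABASE=myapp
#   MYSQL_USER=user
#   MYSQL_PASSWORD=userpass
# Verify the variables inside the running container
docker run -d --name db --env-file .env mysql
docker exec db env | grep MYSQL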
Volume Mounting
# Mount host directory to container
docker run -v /host/path:/container/path nginx
# Mount with read-only access
docker run -v /host/path:/container/path:ro nginx
# Use named volumes
docker run -v my-volume:/container/path nginx
# Mount current directory
docker run -v $(pwd):/app nginx
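Named volumes are created and managed by Docker itself and survive container removal; a short sketch showing how to create, inspect, and use one:
# Create and inspect a named volume
docker volume create my-volume
docker volume inspect my-volume
# Data written under the mount point persists even after the container is removed
docker run -d --name vol-demo -v my-volume:/usr/share/nginx/html nginx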
3. Container Networking
Network Configuration
# List networks
docker network ls
# Create custom network
docker network create my-network
# Run container on specific network
docker run --network my-network nginx
# Connect container to network
docker network connect my-network my-container
# Disconnect container from network
docker network disconnect my-network my-container
Container Communication
# Run multiple containers on same network
docker run -d --name web --network my-network nginx
docker run -d --name db --network my-network mysql
# Containers on the same user-defined network can reach each other by container name
# e.g., the web container can connect to the database at db:3306
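Name-based resolution like this works on user-defined networks but not on the default bridge network. One way to verify it is with a throwaway container that ships basic network tools (a sketch assuming my-network and the db container above exist):
# Resolve and reach the db container by name from a disposable busybox container
docker run --rm --network my-network busybox nslookup db
docker run --rm --network my-network busybox ping -c 1 db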
Advanced Container Operations
1. Container Resource Management
CPU and Memory Limits
# Set memory limit
docker run -m 512m nginx
# Set CPU limit
docker run --cpus="1.5" nginx
# Set both limits
docker run -m 512m --cpus="1.5" nginx
# Set memory and CPU limits with restart policy
docker run -d --name web \
--memory="512m" \
--cpus="1.0" \
--restart=unless-stopped \
nginx
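The limits that were actually applied can be read back from the container's HostConfig; memory is reported in bytes and CPU in nano-CPUs:
# Confirm the limits applied to the container named web
docker inspect --format 'memory={{.HostConfig.Memory}} nano_cpus={{.HostConfig.NanoCpus}}' web
# 512m corresponds to 536870912 bytes; --cpus="1.0" to 1000000000 nano-CPUs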
Resource Monitoring
# Monitor container resources
docker stats
# Monitor specific containers
docker stats web-container db-container
# Monitor with custom format
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
2. Container Logging
Log Management
# View container logs
docker logs my-container
# Follow logs in real-time
docker logs -f my-container
# View logs with timestamps
docker logs -t my-container
# View last 100 lines
docker logs --tail 100 my-container
# View logs since specific time
docker logs --since "2023-01-01T00:00:00" my-container
Log Configuration
# Configure logging driver
docker run --log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
nginx
# Use syslog driver
docker run --log-driver syslog \
--log-opt syslog-address=udp://localhost:514 \
nginx
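Which logging driver a container ended up with, and the daemon-wide default, can both be checked directly:
# Logging driver for a specific container
docker inspect --format '{{.HostConfig.LogConfig.Type}}' my-container
# Default logging driver configured on the Docker daemon
docker info --format '{{.LoggingDriver}}'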
3. Container Health Checks
Health Check Configuration
# In Dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost/ || exit 1
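The same check can also be attached at run time, without modifying the image, via the corresponding docker run flags (a minimal sketch; it assumes curl is available inside the image):
# Equivalent health check defined when starting the container
docker run -d --name web \
--health-cmd "curl -f http://localhost/ || exit 1" \
--health-interval 30s \
--health-timeout 3s \
--health-retries 3 \
nginx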
Health Check Management
# Check container health
docker inspect --format='{{.State.Health.Status}}' my-container
# View health check logs
docker inspect --format='{{range .State.Health.Log}}{{.Output}}{{end}}' my-container
# Run health check manually
docker exec my-container curl -f http://localhost/
Container Debugging and Troubleshooting
1. Interactive Debugging
Access Running Container
# Execute command in running container
docker exec -it my-container bash
# Execute command as root
docker exec -it --user root my-container bash
# Execute specific command
docker exec my-container ps aux
docker exec my-container env
Container Inspection
# Inspect container configuration
docker inspect my-container
# Inspect specific properties
docker inspect --format='{{.State.Status}}' my-container
docker inspect --format='{{.NetworkSettings.IPAddress}}' my-container
docker inspect --format='{{.Mounts}}' my-container
2. Container Cleanup
Cleanup Commands
# Remove stopped containers
docker container prune
# Remove all containers (running containers are skipped unless you add -f)
docker rm $(docker ps -aq)
# Remove containers whose names contain "my-"
docker ps -aq --filter "name=my-" | xargs docker rm
# Force remove running containers
docker rm -f my-container
System Cleanup
# Remove unused images
docker image prune
# Remove all unused images
docker image prune -a
# Remove unused volumes
docker volume prune
# Remove unused networks
docker network prune
# Remove everything unused
docker system prune -a
Best Practices
1. Image Management
- Use specific tags instead of latest
- Regularly update base images
- Keep images small and focused
- Use multi-stage builds for complex applications
- Scan images for vulnerabilities
2. Container Management
- Use meaningful container names
- Set appropriate resource limits
- Implement health checks
- Use restart policies appropriately
- Monitor container logs and metrics
3. Security Considerations
- Run containers as non-root users (see the sketch after this list)
- Use read-only filesystems when possible
- Limit container capabilities
- Scan images for vulnerabilities
- Use secrets management for sensitive data
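A sketch that combines several of these recommendations in one docker run invocation; the exact user ID, writable paths, and capabilities depend on what the image needs, and my-webapp:latest is a placeholder:
# Unprivileged user, read-only root filesystem, no Linux capabilities
docker run -d --name hardened \
--user 1000:1000 \
--read-only --tmpfs /tmp \
--cap-drop ALL \
my-webapp:latest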
Common Use Cases
1. Web Application Deployment
# Run web application with database
docker run -d --name database \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=myapp \
mysql:8.0
docker run -d --name webapp \
-p 8080:80 \
--link database:db \
my-webapp:latest
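Note that --link is a legacy feature; the same deployment works on a user-defined network, with the web application reaching the database by container name (DB_HOST is a hypothetical variable that my-webapp is assumed to read):
# Same deployment on a user-defined network instead of --link
docker network create app-net
docker run -d --name database --network app-net \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=myapp \
mysql:8.0
docker run -d --name webapp --network app-net \
-p 8080:80 \
-e DB_HOST=database \
my-webapp:latest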
2. Development Environment
# Run development environment
docker run -it --name dev \
-v $(pwd):/workspace \
-p 3000:3000 \
node:18 bash
3. Microservices Architecture
# Run multiple microservices
# (my-user-service and my-order-service are placeholders for your own application images)
docker run -d --name api-gateway nginx:alpine
docker run -d --name user-service my-user-service:latest
docker run -d --name order-service my-order-service:latest
docker run -d --name database -e POSTGRES_PASSWORD=secret postgres:13
Summary
Docker images and containers are the fundamental building blocks of containerized applications:
- Images provide immutable templates with all dependencies
- Containers are running instances that can be managed dynamically
- Layered filesystem enables efficient caching and sharing
- Resource management allows fine-grained control over CPU and memory
- Networking and storage provide isolation and data persistence
- Health checks and monitoring ensure application reliability
Understanding these concepts is essential for effective Docker usage and container orchestration.
This tutorial is part of the SysCook DevOps series. Continue to the next chapter to learn about Dockerfile best practices.