Practical Project - First CI/CD Pipeline

Build your first complete CI/CD pipeline from design to implementation with this comprehensive hands-on project.

Project Overview

Project Goals

By the end of this project, you will have:

  • Designed a Complete CI/CD Pipeline: From code commit to production deployment
  • Implemented Pipeline Architecture: Using industry-standard tools and practices
  • Configured Multiple Environments: Development, testing, and production setups
  • Established Success Metrics: KPIs and monitoring for pipeline performance
  • Applied Best Practices: Security, testing, and deployment strategies

Project Architecture

Project Setup

Prerequisites

Before starting this project, ensure you have:

Required Tools

  • Git: Version control system
  • Docker: Containerization platform
  • Node.js: JavaScript runtime (for our sample application)
  • GitHub Account: For repository and CI/CD hosting
  • Cloud Provider Account: AWS, Azure, or GCP (for deployment)

Required Knowledge

  • Basic Git commands and workflow
  • Docker fundamentals
  • Basic command line operations
  • Understanding of web applications

Sample Application

We'll use a simple Node.js web application as our project:

Application Structure

sample-app/
├── src/
│   ├── app.js
│   ├── routes/
│   │   ├── api.js
│   │   └── health.js
│   └── tests/
│       ├── unit/
│       │   └── api.test.js
│       └── integration/
│           └── app.test.js
├── Dockerfile
├── docker-compose.yml
├── package.json
├── .github/
│   └── workflows/
│       └── ci-cd.yml
└── README.md

Sample Application Code

package.json

{
  "name": "sample-cicd-app",
  "version": "1.0.0",
  "description": "Sample application for CI/CD tutorial",
  "main": "src/app.js",
  "scripts": {
    "start": "node src/app.js",
    "test": "jest",
    "test:unit": "jest --testPathPattern=unit",
    "test:integration": "jest --testPathPattern=integration",
    "lint": "eslint src/",
    "lint:fix": "eslint src/ --fix"
  },
  "dependencies": {
    "express": "^4.18.2",
    "cors": "^2.8.5",
    "helmet": "^7.0.0"
  },
  "devDependencies": {
    "jest": "^29.5.0",
    "supertest": "^6.3.3",
    "eslint": "^8.42.0"
  }
}

src/app.js

const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const apiRoutes = require('./routes/api');
const healthRoutes = require('./routes/health');

const app = express();
const PORT = process.env.PORT || 3000;

// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());

// Routes
app.use('/api', apiRoutes);
app.use('/health', healthRoutes);

// Error handling middleware
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).json({ error: 'Something went wrong!' });
});

// Start the server only when run directly, so tests can import the app
// without opening a port
if (require.main === module) {
  app.listen(PORT, () => {
    console.log(`Server running on port ${PORT}`);
  });
}

module.exports = app;

Pipeline Design Phase

1. Pipeline Architecture Design

Stage 1: Build Stage

Objectives:

  • Validate code quality
  • Install dependencies
  • Create deployable artifacts
  • Perform security checks

Stage 2: Test Stage

Objectives:

  • Execute automated tests
  • Measure code coverage
  • Validate code quality metrics
  • Enforce quality gates

Stage 3: Package Stage

Objectives:

  • Create containerized application
  • Perform security scanning
  • Store artifacts in registry
  • Version management

Stage 4: Deploy Stage

Objectives:

  • Deploy to staging environment
  • Validate deployment
  • Deploy to production
  • Monitor application health
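
Conceptually, these four stages form a fail-fast sequence: each stage runs only if the previous one succeeded. A toy model of that control flow (the stage names mirror the list above; the runner itself is purely illustrative, not a real CI engine):

```javascript
// Toy fail-fast pipeline runner: stages run in order, and a failure
// stops the run immediately
async function runPipeline(stages) {
  const completed = [];
  for (const stage of stages) {
    const ok = await stage.run();
    if (!ok) {
      return { status: 'failed', failedAt: stage.name, completed };
    }
    completed.push(stage.name);
  }
  return { status: 'succeeded', completed };
}

// Example: Package fails, so Deploy never runs
const stages = [
  { name: 'Build', run: async () => true },
  { name: 'Test', run: async () => true },
  { name: 'Package', run: async () => false },
  { name: 'Deploy', run: async () => true },
];

runPipeline(stages).then((result) => console.log(result));
```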

2. Environment Configuration

Development Environment

# docker-compose.dev.yml
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - PORT=3000
    volumes:
      - ./src:/app/src
    depends_on:
      - redis
      - postgres

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

  postgres:
    image: postgres:13
    environment:
      - POSTGRES_DB=sample_app_dev
      - POSTGRES_USER=dev_user
      - POSTGRES_PASSWORD=dev_password
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

Staging Environment

# docker-compose.staging.yml
version: '3.8'
services:
  app:
    image: ${DOCKER_REGISTRY}/sample-cicd-app:${VERSION}
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=staging
      - PORT=3000
      - REDIS_URL=redis://redis:6379
      - DATABASE_URL=postgresql://staging_user:staging_password@postgres:5432/sample_app_staging
    depends_on:
      - redis
      - postgres

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

  postgres:
    image: postgres:13
    environment:
      - POSTGRES_DB=sample_app_staging
      - POSTGRES_USER=staging_user
      - POSTGRES_PASSWORD=staging_password
    ports:
      - "5432:5432"

Production Environment

# docker-compose.prod.yml
version: '3.8'
services:
  app:
    image: ${DOCKER_REGISTRY}/sample-cicd-app:${VERSION}
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - PORT=3000
      - REDIS_URL=${REDIS_URL}
      - DATABASE_URL=${DATABASE_URL}
    restart: unless-stopped
    healthcheck:
      # wget ships with the Alpine-based Node image; curl does not
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    depends_on:
      - redis
      - postgres

  redis:
    image: redis:alpine
    restart: unless-stopped

  postgres:
    image: postgres:13
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    restart: unless-stopped
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

3. Docker Configuration

Dockerfile

# Small production image based on Alpine
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev && npm cache clean --force

# Copy source code
COPY src/ ./src/

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Change ownership
RUN chown -R nodejs:nodejs /app
USER nodejs

# Expose port
EXPOSE 3000

# Health check (wget is part of Alpine's BusyBox; curl is not installed)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -qO- http://localhost:3000/health || exit 1

# Start application
CMD ["npm", "start"]

Implementation Phase

1. GitHub Actions Workflow

Create .github/workflows/ci-cd.yml:

name: CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  NODE_VERSION: '18'
  DOCKER_REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # Build and Test Stage
  build-and-test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linting
        run: npm run lint

      - name: Run unit tests
        run: npm run test:unit

      - name: Run integration tests
        run: npm run test:integration

      - name: Generate coverage report
        run: npm run test -- --coverage

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage/lcov.info

      - name: Security audit
        run: npm audit --audit-level moderate

      - name: Build Docker image
        run: |
          docker build -t ${{ env.IMAGE_NAME }}:${{ github.sha }} .
          docker build -t ${{ env.IMAGE_NAME }}:latest .

      - name: Test Docker image
        run: |
          docker run --rm -d -p 3000:3000 --name test-container ${{ env.IMAGE_NAME }}:${{ github.sha }}
          sleep 10
          curl -f http://localhost:3000/health
          docker stop test-container

  # Deploy to Staging
  deploy-staging:
    needs: build-and-test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.DOCKER_REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build Docker image
        # Images built in build-and-test do not carry over to this job's
        # fresh runner, so rebuild before tagging and pushing
        run: docker build -t ${{ env.IMAGE_NAME }}:${{ github.sha }} .

      - name: Push Docker image
        run: |
          docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:staging
          docker push ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          docker push ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:staging

      - name: Deploy to staging
        run: |
          echo "Deploying to staging environment..."
          # Add your staging deployment commands here
          # Example: kubectl apply -f k8s/staging/
          # Example: docker-compose -f docker-compose.staging.yml up -d

  # Deploy to Production
  deploy-production:
    needs: build-and-test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment: production

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.DOCKER_REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build Docker image
        # Rebuild here for the same reason as in deploy-staging
        run: docker build -t ${{ env.IMAGE_NAME }}:${{ github.sha }} .

      - name: Push Docker image
        run: |
          docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          docker push ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          docker push ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}:latest

      - name: Deploy to production
        run: |
          echo "Deploying to production environment..."
          # Add your production deployment commands here
          # Example: kubectl apply -f k8s/production/
          # Example: docker-compose -f docker-compose.prod.yml up -d

      - name: Health check
        run: |
          echo "Performing post-deployment health check..."
          # Add health check commands here
          # Example: curl -f https://your-app.com/health

2. Test Implementation

Unit Tests

// src/tests/unit/api.test.js
const request = require('supertest');
const app = require('../../app');

describe('API Routes', () => {
  describe('GET /api/status', () => {
    it('should return status information', async () => {
      const response = await request(app)
        .get('/api/status')
        .expect(200);

      expect(response.body).toHaveProperty('status');
      expect(response.body).toHaveProperty('timestamp');
      expect(response.body.status).toBe('ok');
    });
  });

  describe('POST /api/data', () => {
    it('should accept valid data', async () => {
      const testData = { message: 'Hello World' };

      const response = await request(app)
        .post('/api/data')
        .send(testData)
        .expect(200);

      expect(response.body).toHaveProperty('success');
      expect(response.body.success).toBe(true);
    });

    it('should reject invalid data', async () => {
      const response = await request(app)
        .post('/api/data')
        .send({})
        .expect(400);

      expect(response.body).toHaveProperty('error');
    });
  });
});

Integration Tests

// src/tests/integration/app.test.js
const request = require('supertest');
const app = require('../../app');

describe('Application Integration Tests', () => {
  describe('Health Check', () => {
    it('should respond to health check', async () => {
      const response = await request(app)
        .get('/health')
        .expect(200);

      expect(response.body).toHaveProperty('status');
      expect(response.body.status).toBe('healthy');
    });
  });

  describe('CORS', () => {
    it('should handle CORS preflight requests', async () => {
      const response = await request(app)
        .options('/api/status')
        .expect(204);

      expect(response.headers).toHaveProperty('access-control-allow-origin');
    });
  });
});

3. Monitoring and Alerting

Health Check Endpoint

// src/routes/health.js
const express = require('express');
const router = express.Router();

router.get('/', async (req, res) => {
  try {
    // Check database connection
    const dbStatus = await checkDatabase();

    // Check external services
    const externalStatus = await checkExternalServices();

    const healthStatus = {
      status: 'healthy',
      timestamp: new Date().toISOString(),
      version: process.env.APP_VERSION || '1.0.0',
      services: {
        database: dbStatus,
        external: externalStatus
      }
    };

    res.status(200).json(healthStatus);
  } catch (error) {
    res.status(503).json({
      status: 'unhealthy',
      timestamp: new Date().toISOString(),
      error: error.message
    });
  }
});

async function checkDatabase() {
  // Add database health check logic
  return { status: 'healthy', responseTime: '10ms' };
}

async function checkExternalServices() {
  // Add external service health check logic
  return { status: 'healthy', services: [] };
}

module.exports = router;
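
The checkExternalServices stub can be fleshed out with per-service timeouts so that one slow dependency cannot hang the whole health endpoint. A sketch using Node 18's built-in fetch; the function signature (taking a service list) and the timeout value are assumptions, not part of the stub above:

```javascript
// Hypothetical fleshed-out external-service check with a per-service timeout
async function checkService(name, url, timeoutMs = 2000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return { name, status: res.ok ? 'healthy' : 'unhealthy' };
  } catch (err) {
    // Network error or timeout both count as unhealthy
    return { name, status: 'unhealthy' };
  } finally {
    clearTimeout(timer);
  }
}

async function checkExternalServices(services) {
  // Check all services concurrently; checkService never rejects
  const results = await Promise.all(
    services.map((s) => checkService(s.name, s.url))
  );
  const allHealthy = results.every((r) => r.status === 'healthy');
  return { status: allHealthy ? 'healthy' : 'unhealthy', services: results };
}
```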

Success Metrics and KPIs

Pipeline Performance Metrics

1. Build Performance

# Metrics to track
build_metrics:
  - build_time: "Target: < 5 minutes"
  - build_success_rate: "Target: > 95%"
  - queue_time: "Target: < 1 minute"
  - resource_utilization: "Target: < 80%"

2. Test Performance

# Test metrics
test_metrics:
  - test_execution_time: "Target: < 3 minutes"
  - test_success_rate: "Target: > 98%"
  - code_coverage: "Target: > 80%"
  - test_flakiness: "Target: < 2%"

3. Deployment Performance

# Deployment metrics
deployment_metrics:
  - deployment_frequency: "Target: Daily"
  - deployment_time: "Target: < 10 minutes"
  - deployment_success_rate: "Target: > 99%"
  - rollback_time: "Target: < 5 minutes"
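
These targets only matter if you actually measure them. A small helper that derives a success rate and average duration from raw pipeline-run records (the record shape is an assumption; your CI provider's API will have its own):

```javascript
// Compute summary metrics from pipeline run records
// Each record: { durationMinutes: number, succeeded: boolean }
function summarizeRuns(runs) {
  if (runs.length === 0) return { successRate: 0, avgDurationMinutes: 0 };
  const successes = runs.filter((r) => r.succeeded).length;
  const totalDuration = runs.reduce((sum, r) => sum + r.durationMinutes, 0);
  return {
    successRate: (successes / runs.length) * 100,
    avgDurationMinutes: totalDuration / runs.length,
  };
}

// Example: 3 of 4 runs succeeded
const runs = [
  { durationMinutes: 4, succeeded: true },
  { durationMinutes: 6, succeeded: true },
  { durationMinutes: 5, succeeded: false },
  { durationMinutes: 5, succeeded: true },
];
console.log(summarizeRuns(runs)); // { successRate: 75, avgDurationMinutes: 5 }
```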

Quality Gates

1. Code Quality Gates

quality_gates:
  code_coverage:
    threshold: 80
    action: "fail_if_below"

  code_quality:
    sonar_quality_gate: "pass"
    action: "fail_if_fail"

  security_vulnerabilities:
    severity: "high"
    action: "fail_if_any"

2. Performance Gates

performance_gates:
  build_time:
    threshold: "5 minutes"
    action: "warn_if_exceeded"

  test_time:
    threshold: "3 minutes"
    action: "fail_if_exceeded"

  deployment_time:
    threshold: "10 minutes"
    action: "fail_if_exceeded"
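
A gate definition like the ones above boils down to comparing a measured value against a threshold and mapping the result to pass, warn, or fail. A minimal evaluator whose action names mirror the config sketches (illustrative only):

```javascript
// Evaluate a single gate: returns 'pass', 'warn', or 'fail'
// gate: { threshold: number,
//         action: 'warn_if_exceeded' | 'fail_if_exceeded' | 'fail_if_below' }
function evaluateGate(gate, measuredValue) {
  const { threshold, action } = gate;
  if (action === 'fail_if_below') {
    return measuredValue < threshold ? 'fail' : 'pass';
  }
  if (measuredValue <= threshold) return 'pass';
  return action === 'warn_if_exceeded' ? 'warn' : 'fail';
}

// Examples mirroring the gates above (minutes for times, percent for coverage)
console.log(evaluateGate({ threshold: 5, action: 'warn_if_exceeded' }, 7));  // 'warn'
console.log(evaluateGate({ threshold: 3, action: 'fail_if_exceeded' }, 2));  // 'pass'
console.log(evaluateGate({ threshold: 80, action: 'fail_if_below' }, 72));   // 'fail'
```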

Monitoring Dashboard

Key Metrics to Monitor

Project Execution Steps

Step 1: Repository Setup

  1. Create GitHub repository
  2. Initialize project structure
  3. Set up branch protection rules
  4. Configure required status checks

Step 2: Environment Setup

  1. Set up development environment
  2. Configure staging environment
  3. Prepare production environment
  4. Set up monitoring and logging

Step 3: Pipeline Implementation

  1. Create GitHub Actions workflow
  2. Implement build and test stages
  3. Configure deployment stages
  4. Set up quality gates

Step 4: Testing and Validation

  1. Test pipeline with sample commits
  2. Validate all stages work correctly
  3. Test rollback procedures
  4. Verify monitoring and alerting

Step 5: Documentation and Training

  1. Document pipeline processes
  2. Create troubleshooting guides
  3. Train team members
  4. Establish maintenance procedures

Troubleshooting Common Issues

Build Failures

# Common build issues and solutions
build_issues:
  dependency_conflicts:
    solution: "Update package.json, clear npm cache"

  environment_variables:
    solution: "Check GitHub secrets configuration"

  resource_limits:
    solution: "Optimize build process, use caching"

Test Failures

# Common test issues and solutions
test_issues:
  flaky_tests:
    solution: "Add retry logic, improve test stability"

  environment_differences:
    solution: "Use Docker for consistent environments"

  test_timeout:
    solution: "Increase timeout, optimize test execution"
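
The "add retry logic" advice can be as simple as a generic async retry wrapper around the flaky operation (a sketch; test runners also offer built-in mechanisms for this, such as Jest's jest.retryTimes):

```javascript
// Retry an async operation up to `attempts` times with a fixed delay
async function withRetry(fn, attempts = 3, delayMs = 100) {
  let lastError;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}

// Example: an operation that fails twice before succeeding
let calls = 0;
async function flakyOperation() {
  calls++;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
}

withRetry(flakyOperation).then((result) => console.log(result, calls)); // ok 3
```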

Deployment Issues

# Common deployment issues and solutions
deployment_issues:
permission_errors:
solution: "Check service account permissions"

resource_conflicts:
solution: "Implement proper resource management"

health_check_failures:
solution: "Verify health check endpoints"

Key Takeaways

Project Success Factors

  1. Start Simple: Begin with basic pipeline and add complexity gradually
  2. Automate Everything: Minimize manual intervention in the pipeline
  3. Monitor Continuously: Implement comprehensive monitoring and alerting
  4. Test Thoroughly: Ensure comprehensive test coverage and quality gates
  5. Document Everything: Maintain clear documentation for all processes

Lessons Learned

  • Pipeline Design: Plan architecture before implementation
  • Environment Consistency: Use containers for consistent environments
  • Quality Gates: Implement strict quality gates early
  • Monitoring: Set up monitoring before going to production
  • Team Training: Ensure team understands the pipeline and processes

Next Steps

After completing this project:

  1. Optimize Performance: Fine-tune pipeline for better performance
  2. Add Advanced Features: Implement advanced deployment strategies
  3. Scale Operations: Apply learnings to larger, more complex projects
  4. Share Knowledge: Document and share learnings with the team

Congratulations! You've successfully designed and implemented your first CI/CD pipeline. This foundation will serve as the basis for more advanced CI/CD implementations in your DevOps journey.


This practical project provides hands-on experience with real-world CI/CD implementation. Apply these concepts and patterns to your own projects to build robust, scalable CI/CD systems.