# Deployment Strategies

Master modern deployment strategies to achieve zero-downtime deployments, minimize risk, and ensure reliable software delivery.

## Deployment Strategy Overview

Modern deployment strategies focus on minimizing risk, reducing downtime, and enabling rapid rollback capabilities while maintaining service availability.

## Deployment Strategy Comparison

### Strategy Selection Matrix
| Strategy | Downtime | Risk | Cost | Complexity | Rollback Speed |
|---|---|---|---|---|---|
| Blue-Green | Zero | Low | High | Medium | Instant |
| Canary | Zero | Very Low | Medium | High | Fast |
| Rolling | Minimal | Medium | Low | Low | Medium |
| A/B Testing | Zero | Low | Medium | High | Fast |
## Blue-Green Deployment

Blue-Green deployment maintains two identical production environments, allowing instant switching between versions.

### Kubernetes Blue-Green Implementation

#### 1. Blue-Green Service Configuration
```yaml
# blue-green-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  selector:
    app: myapp
    color: blue  # Current active color
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
  labels:
    app: myapp
    color: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      color: blue
  template:
    metadata:
      labels:
        app: myapp
        color: blue
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0.0
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
```
#### 2. Blue-Green Deployment Script
```bash
#!/bin/bash
# blue-green-deploy.sh
set -e

APP_NAME="myapp"
NAMESPACE="default"
NEW_VERSION="$1"
REGISTRY="ghcr.io"

if [ -z "$NEW_VERSION" ]; then
  echo "Usage: $0 <new-version>"
  exit 1
fi

echo "Starting Blue-Green deployment for $APP_NAME version $NEW_VERSION"

# Get current active color
CURRENT_COLOR=$(kubectl get service $APP_NAME-service -o jsonpath='{.spec.selector.color}')
echo "Current active color: $CURRENT_COLOR"

# Determine new color
if [ "$CURRENT_COLOR" = "blue" ]; then
  NEW_COLOR="green"
else
  NEW_COLOR="blue"
fi

echo "Deploying to $NEW_COLOR environment"

# Deploy new version to inactive environment
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $APP_NAME-$NEW_COLOR
  labels:
    app: $APP_NAME
    color: $NEW_COLOR
spec:
  replicas: 3
  selector:
    matchLabels:
      app: $APP_NAME
      color: $NEW_COLOR
  template:
    metadata:
      labels:
        app: $APP_NAME
        color: $NEW_COLOR
    spec:
      containers:
        - name: $APP_NAME
          image: $REGISTRY/$APP_NAME:$NEW_VERSION
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
EOF

# Wait for new deployment to be ready
echo "Waiting for $NEW_COLOR deployment to be ready..."
kubectl rollout status deployment/$APP_NAME-$NEW_COLOR --timeout=300s

# Health check
echo "Performing health check on $NEW_COLOR environment..."
kubectl get pods -l app=$APP_NAME,color=$NEW_COLOR

# Get a pod for health check
POD_NAME=$(kubectl get pods -l app=$APP_NAME,color=$NEW_COLOR -o jsonpath='{.items[0].metadata.name}')

# Test the new deployment
echo "Testing new deployment..."
kubectl exec $POD_NAME -- curl -f http://localhost:8080/health || {
  echo "Health check failed for $NEW_COLOR deployment"
  exit 1
}

# Switch traffic
echo "Switching traffic from $CURRENT_COLOR to $NEW_COLOR"
kubectl patch service $APP_NAME-service -p "{\"spec\":{\"selector\":{\"color\":\"$NEW_COLOR\"}}}"

# Verify switch
echo "Verifying traffic switch..."
kubectl get service $APP_NAME-service

# Scale down the old deployment (optional cleanup)
echo "Old $CURRENT_COLOR deployment is now idle and can be cleaned up"
kubectl scale deployment $APP_NAME-$CURRENT_COLOR --replicas=0

echo "Blue-Green deployment completed successfully!"
echo "Active environment: $NEW_COLOR"
echo "New version: $NEW_VERSION"
```
#### 3. Blue-Green Pipeline Integration
```groovy
pipeline {
    agent any

    environment {
        APP_NAME = 'myapp'
        REGISTRY = 'ghcr.io'
        NAMESPACE = 'default'
    }

    stages {
        stage('Build') {
            steps {
                sh 'docker build -t ${REGISTRY}/${APP_NAME}:${BUILD_NUMBER} .'
                sh 'docker push ${REGISTRY}/${APP_NAME}:${BUILD_NUMBER}'
            }
        }

        stage('Blue-Green Deploy') {
            steps {
                script {
                    // Run blue-green deployment
                    sh "./scripts/blue-green-deploy.sh ${BUILD_NUMBER}"

                    // Verify deployment
                    sh """
                        kubectl get pods -l app=${APP_NAME}
                        kubectl get service ${APP_NAME}-service
                    """
                }
            }
        }

        stage('Smoke Tests') {
            steps {
                script {
                    // Run smoke tests against new deployment
                    sh """
                        # Get service URL
                        SERVICE_URL=\$(kubectl get service ${APP_NAME}-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

                        # Run smoke tests
                        curl -f http://\$SERVICE_URL/health
                        curl -f http://\$SERVICE_URL/api/version
                    """
                }
            }
        }
    }

    post {
        failure {
            script {
                // Rollback on failure
                sh "./scripts/blue-green-rollback.sh"
            }
        }
    }
}
```
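The failure handler above invokes `scripts/blue-green-rollback.sh`, which isn't shown. Here is a minimal sketch of what it might contain: the core move is simply flipping the Service selector back to the previous color. The names, replica count, and the `DRY_RUN` flag (which, as the default here, prints the kubectl commands instead of executing them) are all assumptions for illustration:

```bash
#!/bin/bash
# blue-green-rollback.sh (sketch) -- flip the Service selector back
set -e

APP_NAME="myapp"
DRY_RUN="${DRY_RUN:-true}"   # default: print commands; set DRY_RUN=false to execute

run() {
  # In dry-run mode, print the command instead of executing it
  if [ "$DRY_RUN" = "true" ]; then echo "+ $*"; else "$@"; fi
}

# In a live cluster this would come from:
#   kubectl get service $APP_NAME-service -o jsonpath='{.spec.selector.color}'
CURRENT_COLOR="${CURRENT_COLOR:-green}"

if [ "$CURRENT_COLOR" = "blue" ]; then PREVIOUS_COLOR="green"; else PREVIOUS_COLOR="blue"; fi

# Scale the previous color back up before switching traffic to it
# (the deploy script scaled it to zero after the cutover)
run kubectl scale deployment "$APP_NAME-$PREVIOUS_COLOR" --replicas=3
run kubectl rollout status "deployment/$APP_NAME-$PREVIOUS_COLOR" --timeout=300s

# Point the Service back at the previous color
run kubectl patch service "$APP_NAME-service" -p "{\"spec\":{\"selector\":{\"color\":\"$PREVIOUS_COLOR\"}}}"

echo "Rollback complete: traffic restored to $PREVIOUS_COLOR"
```

Because the idle environment is kept (merely scaled down) rather than deleted, rollback is a scale-up plus a selector flip, which is what makes blue-green recovery near-instant.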
## Canary Deployment

Canary deployment gradually rolls out new versions to a small percentage of users, allowing for risk assessment and quick rollback.

### Kubernetes Canary Implementation

#### 1. Canary Deployment with Istio
```yaml
# canary-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
  labels:
    app: myapp
    version: stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
  labels:
    app: myapp
    version: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
        - name: myapp
          image: myapp:v1.1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-vs
spec:
  hosts:
    - myapp-service
  http:
    - match:
        - headers:
            canary:
              exact: "true"
      route:
        - destination:
            host: myapp-service
            subset: canary
          weight: 100
    - route:
        - destination:
            host: myapp-service
            subset: stable
          weight: 90
        - destination:
            host: myapp-service
            subset: canary
          weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp-dr
spec:
  host: myapp-service
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary
```
#### 2. Canary Deployment Script
```bash
#!/bin/bash
# canary-deploy.sh
set -e

APP_NAME="myapp"
NAMESPACE="default"
NEW_VERSION="$1"
CANARY_PERCENTAGE="$2"
REGISTRY="ghcr.io"

if [ -z "$NEW_VERSION" ] || [ -z "$CANARY_PERCENTAGE" ]; then
  echo "Usage: $0 <new-version> <canary-percentage>"
  exit 1
fi

echo "Starting Canary deployment for $APP_NAME version $NEW_VERSION with $CANARY_PERCENTAGE% traffic"

# Deploy canary version
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $APP_NAME-canary
  labels:
    app: $APP_NAME
    version: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $APP_NAME
      version: canary
  template:
    metadata:
      labels:
        app: $APP_NAME
        version: canary
    spec:
      containers:
        - name: $APP_NAME
          image: $REGISTRY/$APP_NAME:$NEW_VERSION
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "$NEW_VERSION"
EOF

# Wait for canary deployment to be ready
echo "Waiting for canary deployment to be ready..."
kubectl rollout status deployment/$APP_NAME-canary --timeout=300s

# Update traffic splitting
STABLE_PERCENTAGE=$((100 - CANARY_PERCENTAGE))
echo "Updating traffic splitting: $STABLE_PERCENTAGE% stable, $CANARY_PERCENTAGE% canary"

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: $APP_NAME-vs
spec:
  hosts:
    - $APP_NAME-service
  http:
    - route:
        - destination:
            host: $APP_NAME-service
            subset: stable
          weight: $STABLE_PERCENTAGE
        - destination:
            host: $APP_NAME-service
            subset: canary
          weight: $CANARY_PERCENTAGE
EOF

echo "Canary deployment completed!"
echo "Traffic distribution: $STABLE_PERCENTAGE% stable, $CANARY_PERCENTAGE% canary"

# Give the canary time to serve real traffic; in a production pipeline,
# query your metrics backend during this window instead of just sleeping
echo "Monitoring canary metrics for 5 minutes..."
sleep 300

# Check canary health
echo "Checking canary health..."
CANARY_POD=$(kubectl get pods -l app=$APP_NAME,version=canary -o jsonpath='{.items[0].metadata.name}')
kubectl exec $CANARY_POD -- curl -f http://localhost:8080/health || {
  echo "Canary health check failed"
  exit 1
}

echo "Canary deployment successful!"
```
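A healthy canary still has to be promoted (or abandoned). The script above ends at the traffic split; a sketch of the promotion step, under the same naming assumptions, might look like the following. The `DRY_RUN` flag (default here) prints commands instead of executing them, and the VirtualService re-apply (as in the deploy script, but with a 100/0 split) is elided for brevity:

```bash
#!/bin/bash
# canary-promote.sh (sketch) -- promote a healthy canary to stable
set -e

APP_NAME="myapp"
NEW_VERSION="${NEW_VERSION:-v1.1.0}"   # assumed canary version
REGISTRY="ghcr.io"
DRY_RUN="${DRY_RUN:-true}"             # default: print commands only

run() { if [ "$DRY_RUN" = "true" ]; then echo "+ $*"; else "$@"; fi; }

# Roll the stable Deployment forward to the canary's image
run kubectl set image "deployment/$APP_NAME-stable" "$APP_NAME=$REGISTRY/$APP_NAME:$NEW_VERSION"
run kubectl rollout status "deployment/$APP_NAME-stable" --timeout=300s

# Return all traffic to stable (re-apply the VirtualService with these
# weights, as in canary-deploy.sh), then retire the canary pods
STABLE_WEIGHT=100
CANARY_WEIGHT=0
run kubectl scale "deployment/$APP_NAME-canary" --replicas=0

echo "Promoted $NEW_VERSION to stable ($STABLE_WEIGHT/$CANARY_WEIGHT split)"
```

Abandoning a bad canary is the mirror image: skip the `set image` step, restore the 100/0 split, and scale the canary to zero.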
#### 3. Automated Canary Analysis
```yaml
# canary-analysis.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp-rollout
spec:
  replicas: 10
  strategy:
    canary:
      canaryService: myapp-canary
      stableService: myapp-stable
      steps:
        - setWeight: 10
        - pause:
            duration: 5m
        - analysis:
            templates:
              - templateName: success-rate
            args:
              - name: service-name
                value: myapp-canary
        - setWeight: 20
        - pause:
            duration: 5m
        - analysis:
            templates:
              - templateName: success-rate
            args:
              - name: service-name
                value: myapp-canary
        - setWeight: 50
        - pause:
            duration: 10m
        - analysis:
            templates:
              - templateName: success-rate
            args:
              - name: service-name
                value: myapp-canary
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 30s
      successCondition: result[0] >= 0.95
      failureLimit: 3
      provider:
        prometheus:
          address: http://prometheus:9090
          query: |
            sum(rate(http_requests_total{job="{{args.service-name}}",status!~"5.."}[5m])) /
            sum(rate(http_requests_total{job="{{args.service-name}}"}[5m]))
```
## Rolling Deployment

Rolling deployment updates instances gradually, replacing old instances with new ones one at a time.

### Rolling Deployment Configuration

#### 1. Kubernetes Rolling Update
```yaml
# rolling-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0.0
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
```
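The `rollingUpdate` parameters bound how far the pod count can drift during an update: Kubernetes keeps at least `replicas - maxUnavailable` pods available and creates at most `replicas + maxSurge` pods in total. A quick check of those bounds for the values above:

```bash
# Pod-count bounds implied by the rollingUpdate settings above
REPLICAS=5
MAX_UNAVAILABLE=1
MAX_SURGE=1

MIN_AVAILABLE=$((REPLICAS - MAX_UNAVAILABLE))   # floor during the update
MAX_TOTAL=$((REPLICAS + MAX_SURGE))             # ceiling during the update

echo "rollout keeps between $MIN_AVAILABLE available and $MAX_TOTAL total pods"
```

Tightening `maxUnavailable` to 0 trades update speed for a guarantee that full capacity is maintained throughout the rollout.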
#### 2. Rolling Deployment Script
```bash
#!/bin/bash
# rolling-deploy.sh
set -e

APP_NAME="myapp"
NEW_VERSION="$1"
REGISTRY="ghcr.io"

if [ -z "$NEW_VERSION" ]; then
  echo "Usage: $0 <new-version>"
  exit 1
fi

echo "Starting Rolling deployment for $APP_NAME version $NEW_VERSION"

# Update deployment image
kubectl set image deployment/$APP_NAME $APP_NAME=$REGISTRY/$APP_NAME:$NEW_VERSION

# Wait for rollout to complete
echo "Waiting for rollout to complete..."
kubectl rollout status deployment/$APP_NAME --timeout=600s

# Verify deployment
echo "Verifying deployment..."
kubectl get pods -l app=$APP_NAME

# Check that all pods are running (note: Running is not the same as Ready;
# use "kubectl wait --for=condition=ready" for a stricter check)
READY_PODS=$(kubectl get pods -l app=$APP_NAME --no-headers | grep Running | wc -l)
TOTAL_PODS=$(kubectl get pods -l app=$APP_NAME --no-headers | wc -l)

if [ "$READY_PODS" -eq "$TOTAL_PODS" ]; then
  echo "All pods are ready!"
else
  echo "Some pods are not ready: $READY_PODS/$TOTAL_PODS"
  exit 1
fi

# Health check
echo "Performing health check..."
POD_NAME=$(kubectl get pods -l app=$APP_NAME -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD_NAME -- curl -f http://localhost:8080/health

echo "Rolling deployment completed successfully!"
```
#### 3. Rolling Deployment with Health Checks
```groovy
pipeline {
    agent any

    stages {
        stage('Rolling Deploy') {
            steps {
                script {
                    // Update deployment
                    sh "kubectl set image deployment/myapp myapp=ghcr.io/myapp:${BUILD_NUMBER}"

                    // Wait for rollout with timeout
                    sh "kubectl rollout status deployment/myapp --timeout=600s"

                    // Verify deployment
                    sh "kubectl get pods -l app=myapp"

                    // Health check
                    sh """
                        # Wait for all pods to be ready
                        kubectl wait --for=condition=ready pod -l app=myapp --timeout=300s

                        # Perform health check
                        POD_NAME=\$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')
                        kubectl exec \$POD_NAME -- curl -f http://localhost:8080/health
                    """
                }
            }
        }
    }

    post {
        failure {
            script {
                // Rollback on failure
                sh "kubectl rollout undo deployment/myapp"
                sh "kubectl rollout status deployment/myapp --timeout=300s"
            }
        }
    }
}
```
## A/B Testing Deployment

A/B testing deployment allows testing different versions of an application with different user segments.

### A/B Testing Implementation

#### 1. Feature Flag Configuration
```yaml
# feature-flags.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
data:
  config.yaml: |
    experiments:
      new-checkout-flow:
        enabled: true
        traffic_percentage: 50
        segments:
          - name: "control"
            percentage: 50
            config:
              checkout_version: "v1"
          - name: "test"
            percentage: 50
            config:
              checkout_version: "v2"
        metrics:
          - conversion_rate
          - bounce_rate
          - time_on_page
        duration: "7d"
```
#### 2. A/B Testing Service
```javascript
// A/B testing service
class ABTestingService {
  constructor() {
    this.experiments = new Map();
    this.loadExperiments();
  }

  loadExperiments() {
    // Load experiments from the ConfigMap (or a database);
    // getConfig() is assumed to return the parsed feature-flag config,
    // in which `experiments` is a map keyed by experiment name
    const config = this.getConfig();
    Object.entries(config.experiments).forEach(([name, exp]) => {
      this.experiments.set(name, exp);
    });
  }

  getVariant(userId, experimentName) {
    const experiment = this.experiments.get(experimentName);
    if (!experiment || !experiment.enabled) {
      return null;
    }

    // Determine user segment via consistent hashing, so the same user
    // always lands in the same bucket for a given experiment
    const hash = this.hash(`${userId}:${experimentName}`);
    const bucket = hash % 100;

    let cumulativePercentage = 0;
    for (const segment of experiment.segments) {
      cumulativePercentage += segment.percentage;
      if (bucket < cumulativePercentage) {
        return {
          experiment: experimentName,
          segment: segment.name,
          config: segment.config
        };
      }
    }
    return null;
  }

  trackEvent(userId, experimentName, eventName, eventData) {
    const variant = this.getVariant(userId, experimentName);
    if (variant) {
      // Send event to analytics service
      this.sendAnalyticsEvent({
        userId,
        experiment: experimentName,
        variant: variant.segment,
        event: eventName,
        data: eventData,
        timestamp: new Date().toISOString()
      });
    }
  }

  hash(str) {
    let hash = 0;
    for (let i = 0; i < str.length; i++) {
      const char = str.charCodeAt(i);
      hash = ((hash << 5) - hash) + char;
      hash = hash & hash; // Convert to 32-bit integer
    }
    return Math.abs(hash);
  }

  sendAnalyticsEvent(event) {
    // Send to analytics service (e.g., Google Analytics, Mixpanel)
    console.log('Analytics event:', event);
  }
}
```
#### 3. A/B Testing Middleware
```javascript
// Express.js middleware for A/B testing
const express = require('express');
const app = express();
const abTestingService = new ABTestingService();

// A/B testing middleware (req.session requires session middleware,
// e.g. express-session, to be registered first)
app.use((req, res, next) => {
  const userId = req.headers['x-user-id'] || req.session?.userId;

  if (userId) {
    // Get A/B test variant
    const checkoutVariant = abTestingService.getVariant(
      userId,
      'new-checkout-flow'
    );

    if (checkoutVariant) {
      req.abTest = checkoutVariant;
      res.locals.checkoutVersion = checkoutVariant.config.checkout_version;
    }
  }
  next();
});

// Route with A/B testing
app.get('/checkout', (req, res) => {
  const checkoutVersion = res.locals.checkoutVersion || 'v1';

  if (checkoutVersion === 'v2') {
    // Render new checkout flow
    res.render('checkout-v2');
  } else {
    // Render original checkout flow
    res.render('checkout-v1');
  }
});

// Track conversion events
app.post('/checkout/complete', (req, res) => {
  const userId = req.headers['x-user-id'];

  // Track conversion
  abTestingService.trackEvent(
    userId,
    'new-checkout-flow',
    'conversion',
    {
      amount: req.body.amount,
      payment_method: req.body.paymentMethod
    }
  );

  res.json({ success: true });
});
```
#### 4. A/B Testing Analytics
```javascript
// A/B testing analytics dashboard
class ABTestingAnalytics {
  constructor() {
    this.analyticsService = new AnalyticsService();
  }

  async getExperimentResults(experimentName, duration = '7d') {
    const startDate = new Date(Date.now() - this.parseDuration(duration));

    const results = await this.analyticsService.query({
      experiment: experimentName,
      startDate: startDate,
      metrics: [
        'conversion_rate',
        'bounce_rate',
        'average_order_value',
        'time_on_page'
      ]
    });

    return this.analyzeResults(results);
  }

  analyzeResults(results) {
    const variants = {};

    results.forEach(result => {
      if (!variants[result.variant]) {
        variants[result.variant] = {
          users: 0,
          conversions: 0,
          totalValue: 0,
          totalTime: 0
        };
      }

      variants[result.variant].users++;
      if (result.event === 'conversion') {
        variants[result.variant].conversions++;
        variants[result.variant].totalValue += result.data.amount;
      }
      variants[result.variant].totalTime += result.data.time_on_page;
    });

    // Calculate metrics
    Object.keys(variants).forEach(variant => {
      const data = variants[variant];
      data.conversion_rate = data.conversions / data.users;
      data.average_order_value = data.totalValue / data.conversions;
      data.average_time_on_page = data.totalTime / data.users;
    });

    return variants;
  }

  getStatisticalSignificance(control, test, confidence = 0.95) {
    // Two-proportion z-test on conversion rates
    const n1 = control.users;
    const n2 = test.users;
    const x1 = control.conversions;
    const x2 = test.conversions;

    const p1 = x1 / n1;
    const p2 = x2 / n2;
    const p = (x1 + x2) / (n1 + n2); // pooled proportion

    const se = Math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2));
    const z = (p1 - p2) / se;

    // Check whether the result is statistically significant
    const isSignificant = Math.abs(z) > 1.96; // 95% confidence

    return {
      isSignificant,
      confidence,
      zScore: z,
      improvement: ((p2 - p1) / p1) * 100
    };
  }
}
```
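To make the z-test concrete, here is the same calculation worked through with made-up numbers (1,000 users per variant, 100 control vs 130 test conversions) as a one-off shell/awk check:

```bash
# Worked z-test example: control 100/1000, test 130/1000 conversions
awk 'BEGIN {
  n1 = 1000; x1 = 100    # control: users, conversions
  n2 = 1000; x2 = 130    # test:    users, conversions
  p1 = x1 / n1; p2 = x2 / n2
  p  = (x1 + x2) / (n1 + n2)                 # pooled conversion rate
  se = sqrt(p * (1 - p) * (1/n1 + 1/n2))     # standard error
  z  = (p1 - p2) / se
  sig = (z > 1.96 || z < -1.96) ? "yes" : "no"
  printf "z=%.2f significant=%s improvement=%.0f%%\n", z, sig, (p2 - p1) / p1 * 100
}'
# prints: z=-2.10 significant=yes improvement=30%
```

With |z| ≈ 2.10 > 1.96, the 30% relative lift clears the 95% confidence bar; with, say, 110 test conversions instead of 130, it would not.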
## Deployment Monitoring and Rollback

### Deployment Monitoring

#### 1. Health Check Implementation
```yaml
# health-check.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: health-check-script
data:
  health-check.sh: |
    #!/bin/bash
    set -e

    # Check application health
    curl -f http://localhost:8080/health || exit 1

    # Check database connectivity
    curl -f http://localhost:8080/health/db || exit 1

    # Check external dependencies
    curl -f http://localhost:8080/health/dependencies || exit 1

    # Check memory usage
    MEMORY_USAGE=$(ps aux | grep java | awk '{print $4}' | head -1)
    if (( $(echo "$MEMORY_USAGE > 90" | bc -l) )); then
      echo "Memory usage too high: $MEMORY_USAGE%"
      exit 1
    fi

    echo "Health check passed"
```
#### 2. Deployment Monitoring Pipeline
```groovy
pipeline {
    agent any

    stages {
        stage('Deploy') {
            steps {
                script {
                    // Deploy application
                    sh "kubectl apply -f k8s/deployment.yaml"
                    sh "kubectl rollout status deployment/myapp --timeout=600s"
                }
            }
        }

        stage('Health Check') {
            steps {
                script {
                    // Wait for deployment to be ready
                    sh "kubectl wait --for=condition=ready pod -l app=myapp --timeout=300s"

                    // Perform health checks
                    sh """
                        # Get pod name
                        POD_NAME=\$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')

                        # Run health check script
                        kubectl exec \$POD_NAME -- /bin/bash /scripts/health-check.sh
                    """
                }
            }
        }

        stage('Smoke Tests') {
            steps {
                script {
                    // Run smoke tests
                    sh """
                        # Get service URL
                        SERVICE_URL=\$(kubectl get service myapp-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

                        # Run smoke tests
                        npm run smoke-tests -- --url http://\$SERVICE_URL
                    """
                }
            }
        }

        stage('Performance Tests') {
            steps {
                script {
                    // Run load tests
                    sh "k6 run --vus 10 --duration 30s tests/load-test.js"
                }
            }
        }
    }

    post {
        failure {
            script {
                // Automatic rollback on failure
                sh "kubectl rollout undo deployment/myapp"
                sh "kubectl rollout status deployment/myapp --timeout=300s"

                // Notify team
                slackSend(
                    channel: '#deployments',
                    color: 'danger',
                    message: "❌ Deployment failed for ${env.JOB_NAME} #${env.BUILD_NUMBER}. Automatic rollback initiated."
                )
            }
        }
        success {
            script {
                // Notify success
                slackSend(
                    channel: '#deployments',
                    color: 'good',
                    message: "✅ Deployment successful for ${env.JOB_NAME} #${env.BUILD_NUMBER}"
                )
            }
        }
    }
}
```
### Automated Rollback

#### 1. Rollback Script
```bash
#!/bin/bash
# rollback.sh
set -e

APP_NAME="myapp"
NAMESPACE="default"

echo "Starting rollback for $APP_NAME"

# Check current deployment status
echo "Current deployment status:"
kubectl rollout status deployment/$APP_NAME

# Rollback to previous version
echo "Rolling back to previous version..."
kubectl rollout undo deployment/$APP_NAME

# Wait for rollback to complete
echo "Waiting for rollback to complete..."
kubectl rollout status deployment/$APP_NAME --timeout=300s

# Verify rollback
echo "Verifying rollback..."
kubectl get pods -l app=$APP_NAME

# Health check
echo "Performing health check..."
POD_NAME=$(kubectl get pods -l app=$APP_NAME -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD_NAME -- curl -f http://localhost:8080/health

echo "Rollback completed successfully!"
```
#### 2. Intelligent Rollback System
```javascript
// Intelligent rollback system
class IntelligentRollback {
  constructor() {
    this.metricsService = new MetricsService();
    this.alertingService = new AlertingService();
    this.rollbackThresholds = {
      error_rate: 0.05,    // 5% error rate
      response_time: 2000, // 2 seconds
      cpu_usage: 80,       // 80% CPU usage
      memory_usage: 85     // 85% memory usage
    };
  }

  async monitorDeployment(deploymentId) {
    const startTime = Date.now();
    const monitoringDuration = 10 * 60 * 1000; // 10 minutes

    while (Date.now() - startTime < monitoringDuration) {
      const metrics = await this.metricsService.getDeploymentMetrics(deploymentId);

      if (this.shouldRollback(metrics)) {
        await this.executeRollback(deploymentId);
        break;
      }

      // Wait 30 seconds before the next check
      await this.sleep(30000);
    }
  }

  shouldRollback(metrics) {
    const violations = [];

    if (metrics.error_rate > this.rollbackThresholds.error_rate) {
      violations.push(`Error rate ${metrics.error_rate} exceeds threshold`);
    }
    if (metrics.avg_response_time > this.rollbackThresholds.response_time) {
      violations.push(`Response time ${metrics.avg_response_time}ms exceeds threshold`);
    }
    if (metrics.cpu_usage > this.rollbackThresholds.cpu_usage) {
      violations.push(`CPU usage ${metrics.cpu_usage}% exceeds threshold`);
    }
    if (metrics.memory_usage > this.rollbackThresholds.memory_usage) {
      violations.push(`Memory usage ${metrics.memory_usage}% exceeds threshold`);
    }

    if (violations.length > 0) {
      console.log('Rollback triggered due to:', violations);
      return true;
    }
    return false;
  }

  async executeRollback(deploymentId) {
    try {
      console.log(`Executing rollback for deployment ${deploymentId}`);

      // Execute rollback command
      await this.executeCommand(`kubectl rollout undo deployment/${deploymentId}`);

      // Wait for rollback to complete
      await this.executeCommand(`kubectl rollout status deployment/${deploymentId} --timeout=300s`);

      // Verify rollback
      const status = await this.executeCommand(`kubectl get pods -l app=${deploymentId}`);
      console.log('Rollback status:', status);

      // Send alert
      await this.alertingService.sendAlert({
        type: 'rollback',
        deployment: deploymentId,
        message: 'Automatic rollback executed due to performance degradation',
        timestamp: new Date().toISOString()
      });

      console.log('Rollback completed successfully');
    } catch (error) {
      console.error('Rollback failed:', error);

      // Send critical alert
      await this.alertingService.sendCriticalAlert({
        type: 'rollback_failure',
        deployment: deploymentId,
        error: error.message,
        timestamp: new Date().toISOString()
      });
    }
  }

  async executeCommand(command) {
    const { exec } = require('child_process');
    const { promisify } = require('util');
    const execAsync = promisify(exec);

    // execAsync rejects on a non-zero exit code; don't treat stderr
    // output alone as failure (kubectl writes progress there)
    const { stdout } = await execAsync(command);
    return stdout;
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
```
## Key Takeaways

### Deployment Strategy Selection

- **Blue-Green**: Best for zero-downtime deployments with instant rollback
- **Canary**: Ideal for risk mitigation and gradual rollouts
- **Rolling**: Suitable for resource-efficient updates with minimal downtime
- **A/B Testing**: Well suited to feature validation and data-driven decisions

### Best Practices Summary

- **Health Checks**: Implement comprehensive health monitoring
- **Automated Rollback**: Prepare for quick recovery from failures
- **Monitoring**: Monitor continuously during and after deployment
- **Testing**: Test thoroughly before production deployment
- **Documentation**: Document deployment procedures clearly

### Implementation Considerations

- **Resource Requirements**: Consider infrastructure costs and capacity
- **Team Expertise**: Choose strategies that match team capabilities
- **Business Requirements**: Align with business continuity needs
- **Risk Tolerance**: Balance risk with deployment speed
- **Monitoring Capabilities**: Ensure adequate observability
**Next Steps**: Ready to manage environments and configurations? Continue to Section 5.2: Environment Management to learn about Infrastructure as Code and configuration management.

Mastering deployment strategies enables reliable, low-risk software delivery. In the next section, we'll explore environment management and configuration strategies.