Pipeline as Code

Master Jenkins Pipeline as Code to create sophisticated, maintainable, and version-controlled CI/CD pipelines.

Pipeline as Code Overview

Pipeline as Code is a fundamental shift in how Jenkins jobs are defined, moving from point-and-click configuration to code-based definitions stored in version control.

Benefits of Pipeline as Code

Defining pipelines as code puts every job definition in version control, makes pipeline changes reviewable like any other code, and enables reuse through shared libraries. These benefits are summarized in the Key Takeaways at the end of this section.

Pipeline Types

1. Declarative Pipeline

  • Structured Syntax: Predefined structure with clear sections
  • Validation: Built-in validation and error checking
  • Readability: Easy to read and understand
  • Best Practices: Enforces Jenkins best practices
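The bullets above map directly onto the fixed sections of a Declarative Pipeline. A minimal, illustrative skeleton looks like this:

```groovy
// Minimal Declarative Pipeline: the pipeline/agent/stages/steps
// blocks are part of the predefined structure
pipeline {
    agent any                // where the pipeline runs
    stages {
        stage('Hello') {     // each stage has a name and a steps block
            steps {
                echo 'Hello from a Declarative Pipeline'
            }
        }
    }
}
```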

2. Scripted Pipeline

  • Flexibility: Full Groovy scripting capabilities
  • Dynamic Logic: Complex conditional and loop constructs
  • Custom Functions: Define custom functions and classes
  • Advanced Features: Access to Jenkins API and advanced features
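For comparison, a Scripted Pipeline is ordinary Groovy inside a node block, so loops and conditionals are unrestricted:

```groovy
// Minimal Scripted Pipeline: plain Groovy, no fixed structure
node {
    stage('Hello') {
        def names = ['world', 'Jenkins']
        names.each { n ->
            echo "Hello, ${n}"   // arbitrary Groovy logic inside stages
        }
    }
}
```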

Declarative Pipeline

Declarative Pipeline provides a simplified, opinionated syntax for creating Jenkins pipelines with built-in validation and best practices.

Basic Declarative Pipeline Structure

pipeline {
    agent any

    environment {
        NODE_VERSION = '18'
        DOCKER_REGISTRY = 'ghcr.io'
        IMAGE_NAME = "${env.JOB_NAME}"
    }

    parameters {
        choice(
            name: 'ENVIRONMENT',
            choices: ['staging', 'production'],
            description: 'Target deployment environment'
        )
        string(
            name: 'VERSION',
            defaultValue: 'latest',
            description: 'Application version to deploy'
        )
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }

        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'npm run test:unit'
                    }
                }
                stage('Integration Tests') {
                    steps {
                        sh 'npm run test:integration'
                    }
                }
            }
        }

        stage('Package') {
            steps {
                script {
                    def image = docker.build("${DOCKER_REGISTRY}/${IMAGE_NAME}:${params.VERSION}")
                    docker.withRegistry('https://ghcr.io', 'docker-registry-credentials') {
                        image.push()
                        image.push('latest')
                    }
                }
            }
        }

        stage('Deploy') {
            when {
                anyOf {
                    branch 'main'
                    branch 'develop'
                }
            }
            steps {
                script {
                    if (params.ENVIRONMENT == 'production') {
                        sh 'kubectl apply -f k8s/production/'
                    } else {
                        sh 'kubectl apply -f k8s/staging/'
                    }
                }
            }
        }
    }

    post {
        always {
            junit 'test-results/*.xml'
            publishHTML([
                allowMissing: false,
                alwaysLinkToLastBuild: true,
                keepAll: true,
                reportDir: 'coverage',
                reportFiles: 'index.html',
                reportName: 'Coverage Report'
            ])
        }
        success {
            slackSend(
                channel: '#deployments',
                color: 'good',
                message: "✅ ${env.JOB_NAME} #${env.BUILD_NUMBER} deployed successfully to ${params.ENVIRONMENT}"
            )
        }
        failure {
            slackSend(
                channel: '#deployments',
                color: 'danger',
                message: "❌ ${env.JOB_NAME} #${env.BUILD_NUMBER} failed to deploy to ${params.ENVIRONMENT}"
            )
        }
        cleanup {
            cleanWs()
        }
    }
}

Advanced Declarative Pipeline Features

1. Conditional Execution

pipeline {
    agent any

    stages {
        stage('Build') {
            when {
                anyOf {
                    branch 'main'
                    branch 'develop'
                    changeRequest()
                }
            }
            steps {
                sh 'npm run build'
            }
        }

        stage('Deploy to Staging') {
            when {
                branch 'develop'
            }
            steps {
                sh 'kubectl apply -f k8s/staging/'
            }
        }

        stage('Deploy to Production') {
            when {
                allOf {
                    branch 'main'
                    not {
                        changeRequest()
                    }
                }
            }
            steps {
                input message: 'Deploy to production?', ok: 'Deploy'
                sh 'kubectl apply -f k8s/production/'
            }
        }
    }
}

2. Parallel Execution

pipeline {
    agent any

    stages {
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'npm run test:unit'
                    }
                }
                stage('Integration Tests') {
                    steps {
                        sh 'npm run test:integration'
                    }
                }
                stage('Linting') {
                    steps {
                        sh 'npm run lint'
                    }
                }
                stage('Security Scan') {
                    steps {
                        sh 'npm audit'
                    }
                }
            }
        }

        stage('Build Matrix') {
            parallel {
                stage('Build Linux') {
                    agent {
                        label 'linux'
                    }
                    steps {
                        sh 'npm run build:linux'
                    }
                }
                stage('Build Windows') {
                    agent {
                        label 'windows'
                    }
                    steps {
                        bat 'npm run build:windows'
                    }
                }
                stage('Build macOS') {
                    agent {
                        label 'macos'
                    }
                    steps {
                        sh 'npm run build:macos'
                    }
                }
            }
        }
    }
}

3. Environment-Specific Configuration

pipeline {
    agent any

    environment {
        NODE_ENV = 'production'
        DOCKER_REGISTRY = 'ghcr.io'
    }

    stages {
        stage('Environment Setup') {
            steps {
                script {
                    if (env.BRANCH_NAME == 'main') {
                        env.DEPLOY_ENV = 'production'
                        env.API_URL = 'https://api.production.com'
                    } else if (env.BRANCH_NAME == 'develop') {
                        env.DEPLOY_ENV = 'staging'
                        env.API_URL = 'https://api.staging.com'
                    } else {
                        env.DEPLOY_ENV = 'development'
                        env.API_URL = 'https://api.dev.com'
                    }
                }
            }
        }

        stage('Build') {
            steps {
                sh '''
                    echo "Building for ${DEPLOY_ENV}"
                    echo "API URL: ${API_URL}"
                    npm run build:${DEPLOY_ENV}
                '''
            }
        }
    }
}

Scripted Pipeline

Scripted Pipeline provides full Groovy scripting capabilities for complex pipeline logic and advanced automation.

Basic Scripted Pipeline Structure

node {
    // Define variables
    def dockerImage
    def dockerRegistry = 'ghcr.io'
    def imageName = env.JOB_NAME.toLowerCase()

    try {
        stage('Checkout') {
            checkout scm
        }

        stage('Build') {
            sh 'npm install'
            sh 'npm run build'
        }

        stage('Test') {
            sh 'npm run test'
        }

        stage('Package') {
            dockerImage = docker.build("${dockerRegistry}/${imageName}:${env.BUILD_NUMBER}")
        }

        stage('Push') {
            docker.withRegistry('https://ghcr.io', 'docker-registry-credentials') {
                dockerImage.push()
                dockerImage.push('latest')
            }
        }

        stage('Deploy') {
            if (env.BRANCH_NAME == 'main') {
                sh 'kubectl apply -f k8s/production/'
            } else {
                sh 'kubectl apply -f k8s/staging/'
            }
        }

    } catch (Exception e) {
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        // Cleanup
        cleanWs()
    }
}

Advanced Scripted Pipeline Patterns

1. Dynamic Pipeline Generation

def generateStages(services) {
    def stages = [:]

    services.each { service ->
        stages[service] = {
            stage("Build ${service}") {
                sh "docker build -t ${service}:${env.BUILD_NUMBER} ./services/${service}"
            }

            stage("Test ${service}") {
                sh "docker run --rm ${service}:${env.BUILD_NUMBER} npm test"
            }

            stage("Deploy ${service}") {
                sh "kubectl apply -f k8s/${service}/"
            }
        }
    }

    return stages
}

node {
    def services = ['frontend', 'backend', 'api', 'worker']
    def stages = generateStages(services)

    parallel stages
}

2. Conditional Logic and Loops

node {
    def environments = ['staging', 'production']
    def deployResults = [:]

    stage('Build') {
        sh 'docker build -t myapp:latest .'
    }

    stage('Deploy') {
        def branches = [:]

        // Name the loop variable something other than `env` so it does not
        // shadow the global `env` object holding Jenkins environment variables.
        environments.each { envName ->
            branches["Deploy to ${envName}"] = {
                try {
                    if (envName == 'production') {
                        input message: "Deploy to ${envName}?", ok: 'Deploy'
                    }

                    sh "kubectl apply -f k8s/${envName}/"

                    // Health check
                    timeout(time: 5, unit: 'MINUTES') {
                        sh """
                            kubectl rollout status deployment/myapp -n ${envName}
                            kubectl get pods -n ${envName} -l app=myapp
                        """
                    }

                    deployResults[envName] = 'SUCCESS'
                } catch (Exception e) {
                    echo "Deployment to ${envName} failed: ${e.getMessage()}"
                    deployResults[envName] = 'FAILURE'
                }
            }
        }

        // `parallel` does not collect the closures' return values, so each
        // branch records its outcome in the shared deployResults map instead.
        parallel branches
    }

    stage('Notification') {
        def allSuccessful = deployResults.values().every { it == 'SUCCESS' }

        if (allSuccessful) {
            slackSend(
                channel: '#deployments',
                color: 'good',
                message: "✅ All deployments successful"
            )
        } else {
            slackSend(
                channel: '#deployments',
                color: 'danger',
                message: "❌ Some deployments failed"
            )
        }
    }
}

3. Error Handling and Recovery

node {
    def retryCount = 3
    def currentRetry = 0

    while (currentRetry < retryCount) {
        try {
            stage("Attempt ${currentRetry + 1}") {
                sh 'npm run build'
                sh 'npm run test'
                sh 'npm run deploy'
            }
            break // Success, exit retry loop
        } catch (Exception e) {
            currentRetry++
            if (currentRetry >= retryCount) {
                throw e // Final attempt failed
            }
            echo "Attempt ${currentRetry} failed: ${e.getMessage()}"
            echo "Retrying in 30 seconds..."
            sleep 30
        }
    }
}
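The manual loop above is useful when you need custom backoff or logging between attempts; for simple cases, Jenkins provides a built-in retry step that reruns a block on failure:

```groovy
// Equivalent behaviour using the built-in retry step
// (retry reruns the block immediately, without a delay between attempts)
node {
    stage('Build with retries') {
        retry(3) {
            sh 'npm run build'
        }
    }
}
```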

Shared Libraries

Shared Libraries enable code reuse across multiple Jenkins pipelines, promoting consistency and maintainability.

Library Structure

jenkins-shared-library/
├── vars/                          # Global variables
│   ├── buildApp.groovy
│   ├── deployApp.groovy
│   ├── testApp.groovy
│   └── notifySlack.groovy
├── src/                           # Classes and utilities
│   ├── com/
│   │   └── company/
│   │       └── jenkins/
│   │           ├── BuildUtils.groovy
│   │           ├── DeployUtils.groovy
│   │           └── TestUtils.groovy
│   └── org/
│       └── jenkinsci/
│           └── plugins/
│               └── utils/
│                   └── PipelineUtils.groovy
└── resources/                     # Static resources
    ├── scripts/
    │   ├── build.sh
    │   ├── test.sh
    │   └── deploy.sh
    └── configs/
        ├── sonar.properties
        └── docker-compose.yml

Global Variables (vars/)

1. Build Variable (vars/buildApp.groovy)

#!/usr/bin/env groovy

def call(Map config = [:]) {
    def defaultConfig = [
        buildTool: 'npm',
        buildCommand: 'build',
        testCommand: 'test',
        dockerfile: 'Dockerfile',
        imageName: env.JOB_NAME.toLowerCase(),
        imageTag: env.BUILD_NUMBER,
        registry: 'ghcr.io'
    ]

    config = defaultConfig + config

    pipeline {
        agent any

        stages {
            stage('Checkout') {
                steps {
                    checkout scm
                }
            }

            stage('Install Dependencies') {
                steps {
                    sh "${config.buildTool} install"
                }
            }

            stage('Run Tests') {
                steps {
                    sh "${config.buildTool} run ${config.testCommand}"
                }
            }

            stage('Build Application') {
                steps {
                    sh "${config.buildTool} run ${config.buildCommand}"
                }
            }

            stage('Build Docker Image') {
                steps {
                    script {
                        def image = docker.build(
                            "${config.registry}/${config.imageName}:${config.imageTag}",
                            "-f ${config.dockerfile} ."
                        )

                        docker.withRegistry("https://${config.registry}", 'docker-registry-credentials') {
                            image.push()
                            image.push('latest')
                        }
                    }
                }
            }
        }

        post {
            always {
                cleanWs()
            }
            failure {
                notifySlack(
                    message: "❌ Build failed for ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                    color: 'danger'
                )
            }
            success {
                notifySlack(
                    message: "✅ Build successful for ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                    color: 'good'
                )
            }
        }
    }
}

2. Deploy Variable (vars/deployApp.groovy)

#!/usr/bin/env groovy

def call(Map config = [:]) {
    def defaultConfig = [
        environment: 'staging',
        namespace: 'default',
        manifestPath: 'k8s/',
        imageName: env.JOB_NAME.toLowerCase(),
        imageTag: env.BUILD_NUMBER,
        registry: 'ghcr.io'
    ]

    config = defaultConfig + config

    stage("Deploy to ${config.environment}") {
        // Update image tag in manifests
        sh """
            sed -i 's|IMAGE_TAG|${config.imageTag}|g' ${config.manifestPath}/${config.environment}/*.yaml
            sed -i 's|IMAGE_NAME|${config.registry}/${config.imageName}|g' ${config.manifestPath}/${config.environment}/*.yaml
        """

        // Apply manifests
        sh "kubectl apply -f ${config.manifestPath}/${config.environment}/ -n ${config.namespace}"

        // Wait for rollout
        sh "kubectl rollout status deployment/${config.imageName} -n ${config.namespace} --timeout=300s"

        // Health check
        sh "kubectl get pods -n ${config.namespace} -l app=${config.imageName}"
    }
}

3. Notification Variable (vars/notifySlack.groovy)

#!/usr/bin/env groovy

def call(Map config = [:]) {
    def defaultConfig = [
        channel: '#deployments',
        color: 'good',
        message: 'Pipeline completed',
        webhook: 'slack-webhook'
    ]

    config = defaultConfig + config

    slackSend(
        channel: config.channel,
        color: config.color,
        message: config.message,
        teamDomain: 'your-team',
        tokenCredentialId: config.webhook
    )
}

// Allows notifySlack.slack([...]) as an alias for notifySlack([...])
def slack(Map config) {
    call(config)
}

Classes (src/)

1. Build Utilities (src/com/company/jenkins/BuildUtils.groovy)

package com.company.jenkins

class BuildUtils implements Serializable {
    def steps

    BuildUtils(steps) {
        this.steps = steps
    }

    def buildDockerImage(String imageName, String tag, String dockerfile = 'Dockerfile') {
        return steps.docker.build(
            "${imageName}:${tag}",
            "-f ${dockerfile} ."
        )
    }

    def pushDockerImage(def image, String registry, String credentialsId) {
        steps.docker.withRegistry("https://${registry}", credentialsId) {
            image.push()
            image.push('latest')
        }
    }

    def runTests(String buildTool, String testCommand) {
        steps.sh "${buildTool} run ${testCommand}"
    }

    def generateBuildInfo() {
        return [
            buildNumber: steps.env.BUILD_NUMBER,
            gitCommit: steps.env.GIT_COMMIT,
            gitBranch: steps.env.GIT_BRANCH,
            buildTime: new Date().toString(),
            buildUrl: steps.env.BUILD_URL
        ]
    }
}

2. Deploy Utilities (src/com/company/jenkins/DeployUtils.groovy)

package com.company.jenkins

class DeployUtils implements Serializable {
    def steps

    DeployUtils(steps) {
        this.steps = steps
    }

    def deployToKubernetes(String manifestPath, String namespace, Map imageConfig) {
        // Update image references
        updateImageReferences(manifestPath, imageConfig)

        // Apply manifests
        steps.sh "kubectl apply -f ${manifestPath} -n ${namespace}"

        // Wait for rollout
        steps.sh "kubectl rollout status deployment/${imageConfig.name} -n ${namespace} --timeout=300s"

        // Verify deployment
        verifyDeployment(imageConfig.name, namespace)
    }

    def updateImageReferences(String manifestPath, Map imageConfig) {
        steps.sh """
            sed -i 's|IMAGE_TAG|${imageConfig.tag}|g' ${manifestPath}/*.yaml
            sed -i 's|IMAGE_NAME|${imageConfig.name}|g' ${manifestPath}/*.yaml
            sed -i 's|IMAGE_REGISTRY|${imageConfig.registry}|g' ${manifestPath}/*.yaml
        """
    }

    def verifyDeployment(String appName, String namespace) {
        steps.sh "kubectl get pods -n ${namespace} -l app=${appName}"

        // Check pod status (there may be several pods, one status per line)
        def podStatuses = steps.sh(
            script: "kubectl get pods -n ${namespace} -l app=${appName} --no-headers | awk '{print \$3}'",
            returnStdout: true
        ).trim()

        if (podStatuses.split('\n').any { it != 'Running' }) {
            steps.error("Deployment verification failed. Pod statuses: ${podStatuses}")
        }
    }
}

Using Shared Libraries

1. Library Configuration

@Library('jenkins-shared-library@main') _

pipeline {
    agent any

    stages {
        stage('Build and Deploy') {
            steps {
                script {
                    def buildUtils = new com.company.jenkins.BuildUtils(this)
                    def deployUtils = new com.company.jenkins.DeployUtils(this)

                    // Build application
                    def image = buildUtils.buildDockerImage(
                        'myapp',
                        env.BUILD_NUMBER,
                        'Dockerfile'
                    )

                    buildUtils.pushDockerImage(
                        image,
                        'ghcr.io',
                        'docker-registry-credentials'
                    )

                    // Deploy to staging
                    deployUtils.deployToKubernetes(
                        'k8s/staging',
                        'staging',
                        [
                            name: 'myapp',
                            tag: env.BUILD_NUMBER,
                            registry: 'ghcr.io'
                        ]
                    )
                }
            }
        }
    }
}

2. Global Variable Usage

@Library('jenkins-shared-library@main') _

buildApp([
    buildTool: 'npm',
    buildCommand: 'build:prod',
    testCommand: 'test:ci',
    imageName: 'myapp',
    registry: 'ghcr.io'
])

deployApp([
    environment: 'staging',
    namespace: 'staging',
    imageName: 'myapp',
    manifestPath: 'k8s/staging'
])

notifySlack([
    message: "✅ Application deployed successfully",
    color: 'good'
])

Pipeline Best Practices

1. Structure and Organization

pipeline {
    agent any

    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))
        timeout(time: 30, unit: 'MINUTES')
        timestamps()
        ansiColor('xterm')
    }

    environment {
        NODE_VERSION = '18'
        DOCKER_REGISTRY = 'ghcr.io'
    }

    parameters {
        choice(
            name: 'ENVIRONMENT',
            choices: ['staging', 'production'],
            description: 'Target environment'
        )
    }

    stages {
        // Well-defined stages with clear purposes
        stage('Preparation') {
            steps {
                // Setup and validation
            }
        }

        stage('Build') {
            steps {
                // Build application
            }
        }

        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        // Unit testing
                    }
                }
                stage('Integration Tests') {
                    steps {
                        // Integration testing
                    }
                }
            }
        }

        stage('Deploy') {
            when {
                anyOf {
                    branch 'main'
                    branch 'develop'
                }
            }
            steps {
                // Deployment logic
            }
        }
    }

    post {
        always {
            // Always executed cleanup
        }
        success {
            // Success notifications
        }
        failure {
            // Failure handling
        }
        unstable {
            // Unstable build handling
        }
    }
}

2. Error Handling

pipeline {
    agent any

    stages {
        stage('Risky Operation') {
            steps {
                script {
                    try {
                        sh 'risky-command'
                    } catch (Exception e) {
                        echo "Operation failed: ${e.getMessage()}"
                        currentBuild.result = 'UNSTABLE'
                        // Continue with alternative approach
                    }
                }
            }
        }
    }

    post {
        always {
            // Cleanup regardless of result
        }
        failure {
            // Specific failure handling
            emailext(
                subject: "Build Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                body: "Build failed. Check console output for details.",
                to: "${env.CHANGE_AUTHOR_EMAIL}"
            )
        }
    }
}
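When a step is allowed to fail without aborting the whole pipeline, the built-in catchError step is a more declarative-friendly alternative to a manual try/catch inside a script block:

```groovy
// catchError marks the build/stage result but lets the pipeline continue
pipeline {
    agent any
    stages {
        stage('Risky Operation') {
            steps {
                catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
                    sh 'risky-command'
                }
            }
        }
    }
}
```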

3. Performance Optimization

pipeline {
    agent any

    options {
        // Abort remaining parallel branches as soon as one fails
        parallelsAlwaysFailFast()
    }

    stages {
        stage('Parallel Operations') {
            parallel {
                stage('Build') {
                    steps {
                        sh 'npm run build'
                    }
                }
                stage('Lint') {
                    steps {
                        sh 'npm run lint'
                    }
                }
                stage('Security Scan') {
                    steps {
                        sh 'npm audit'
                    }
                }
            }
        }

        stage('Caching') {
            steps {
                script {
                    // Use Docker layer caching
                    def image = docker.build(
                        "myapp:${env.BUILD_NUMBER}",
                        "--cache-from myapp:latest ."
                    )
                }
            }
        }
    }
}

Key Takeaways

Pipeline as Code Benefits

  1. Version Control: All pipeline definitions stored in version control
  2. Code Review: Pipeline changes go through code review process
  3. Reusability: Shared libraries promote code reuse and consistency
  4. Maintainability: Centralized pipeline management and updates
  5. Testing: Pipelines can be tested and validated before deployment

Best Practices Summary

  • Use Declarative: Prefer declarative syntax for most use cases
  • Shared Libraries: Create reusable components for common operations
  • Error Handling: Implement comprehensive error handling and recovery
  • Performance: Optimize with parallel execution and caching
  • Documentation: Document pipeline logic and configuration

Common Patterns

  • Multi-Environment: Environment-specific deployment configurations
  • Parallel Execution: Run independent operations simultaneously
  • Conditional Logic: Smart pipeline behavior based on context
  • Error Recovery: Retry mechanisms and fallback strategies
  • Notifications: Comprehensive notification and alerting

Next Steps: Ready for advanced integrations? Continue to Section 3.3: Integration & Extension to learn how to integrate Jenkins with modern tools and platforms.


Mastering Pipeline as Code enables you to create sophisticated, maintainable CI/CD pipelines that can adapt to complex enterprise requirements.