DevOps CI/CD: Jenkins Pipelines and GitHub Actions
With the fundamentals of CI/CD covered, the next step is building pipelines that are fast, secure, maintainable, and capable of handling complex multi-stage deployment workflows. This topic covers Jenkins Pipeline as Code with Jenkinsfile, advanced GitHub Actions patterns, reusable workflow strategies, and multi-environment deployment architectures.
Jenkins Pipeline as Code – Jenkinsfile
A Jenkinsfile defines a Jenkins pipeline in code, stored in the repository alongside the application. This means the pipeline itself goes through version control, code review, and testing — it is treated as a first-class software artifact.
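Because the Jenkinsfile is code, it can also be validated before merge. One hedged sketch, assuming a reachable Jenkins controller and `JENKINS_USER`/`JENKINS_TOKEN` credentials you define yourself, calls Jenkins' built-in Declarative linter over HTTP:

```shell
#!/bin/sh
# Sketch: validate a Jenkinsfile with the controller's built-in Declarative
# linter before committing. JENKINS_URL, JENKINS_USER, and JENKINS_TOKEN are
# placeholders for your own environment, not values from this article.
if [ -z "${JENKINS_URL:-}" ]; then
  echo "JENKINS_URL not set; skipping Jenkinsfile lint"
else
  # POST the file to the pipeline-model-converter validation endpoint;
  # the response reports success or lists syntax errors.
  curl -s -X POST -u "$JENKINS_USER:$JENKINS_TOKEN" \
    -F "jenkinsfile=<Jenkinsfile" \
    "$JENKINS_URL/pipeline-model-converter/validate"
fi
```

Wiring this into a pre-commit hook or PR check catches syntax errors before they burn a full pipeline run.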
Declarative vs Scripted Pipeline
Jenkins supports two pipeline syntaxes:
- Declarative: Structured, opinionated, easier to read and validate. Recommended for most use cases.
- Scripted: Full Groovy code inside a node block. Maximum flexibility but harder to maintain.
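For contrast, a minimal scripted pipeline sketch (not tied to the full example below) wraps the same kind of steps in a node block and uses plain Groovy control flow:

```groovy
// Minimal scripted pipeline sketch: the same checkout/test steps a
// declarative pipeline would run, expressed as plain Groovy in a node block.
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Test') {
        try {
            sh 'npm ci && npm test'
        } catch (err) {
            // Scripted pipelines handle failures with ordinary Groovy
            // try/catch instead of a declarative post { failure { ... } } block.
            currentBuild.result = 'FAILURE'
            throw err
        }
    }
}
```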
Comprehensive Jenkinsfile Example
pipeline {
    agent any
    environment {
        APP_NAME = 'webapp'
        ECR_REGISTRY = '123456789.dkr.ecr.us-east-1.amazonaws.com'
        IMAGE_TAG = "${env.GIT_COMMIT[0..7]}"
        AWS_REGION = 'us-east-1'
    }
    options {
        timeout(time: 30, unit: 'MINUTES')
        buildDiscarder(logRotator(numToKeepStr: '20'))
        disableConcurrentBuilds()
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'npm ci'
                        sh 'npm run test:unit -- --coverage'
                    }
                    post {
                        always {
                            junit 'test-results/unit/*.xml'
                            publishCoverage adapters: [istanbulCoberturaAdapter('coverage/cobertura-coverage.xml')]
                        }
                    }
                }
                stage('Lint') {
                    steps {
                        sh 'npm run lint'
                    }
                }
                stage('Security Scan') {
                    steps {
                        sh 'npm audit --audit-level=high'
                    }
                }
            }
        }
        stage('Build Docker Image') {
            steps {
                script {
                    dockerImage = docker.build("${ECR_REGISTRY}/${APP_NAME}:${IMAGE_TAG}")
                }
            }
        }
        stage('Scan Image') {
            steps {
                sh """
                    trivy image \
                        --exit-code 1 \
                        --severity HIGH,CRITICAL \
                        --no-progress \
                        ${ECR_REGISTRY}/${APP_NAME}:${IMAGE_TAG}
                """
            }
        }
        stage('Push to ECR') {
            steps {
                script {
                    docker.withRegistry("https://${ECR_REGISTRY}", 'ecr:us-east-1:aws-credentials') {
                        dockerImage.push(IMAGE_TAG)
                        dockerImage.push('latest')
                    }
                }
            }
        }
        stage('Deploy to Staging') {
            steps {
                withKubeConfig([credentialsId: 'k8s-staging-config']) {
                    sh """
                        kubectl set image deployment/${APP_NAME} \
                            ${APP_NAME}=${ECR_REGISTRY}/${APP_NAME}:${IMAGE_TAG} \
                            -n staging
                        kubectl rollout status deployment/${APP_NAME} -n staging --timeout=5m
                    """
                }
            }
        }
        stage('Integration Tests') {
            steps {
                sh 'npm run test:integration -- --base-url=https://staging.myapp.com'
            }
        }
        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            input {
                message "Deploy version ${IMAGE_TAG} to production?"
                ok "Deploy"
                submitter "senior-engineers"
            }
            steps {
                withKubeConfig([credentialsId: 'k8s-prod-config']) {
                    sh """
                        kubectl set image deployment/${APP_NAME} \
                            ${APP_NAME}=${ECR_REGISTRY}/${APP_NAME}:${IMAGE_TAG} \
                            -n production
                        kubectl rollout status deployment/${APP_NAME} -n production --timeout=10m
                    """
                }
            }
        }
    }
    post {
        success {
            slackSend(
                color: 'good',
                message: ":white_check_mark: *${APP_NAME}* version `${IMAGE_TAG}` deployed to production by ${currentBuild.getBuildCauses()[0].userId}"
            )
        }
        failure {
            slackSend(
                color: 'danger',
                message: ":x: *${APP_NAME}* pipeline FAILED at stage: ${env.STAGE_NAME}. <${BUILD_URL}|View Build>"
            )
        }
        always {
            cleanWs()
        }
    }
}
Advanced GitHub Actions Patterns
Reusable Workflows
Reusable workflows extract common pipeline logic into a shared YAML file that multiple repositories can call — eliminating duplication across tens or hundreds of service pipelines.
# .github/workflows/reusable-docker-build.yml (in a central repo)
name: Reusable Docker Build and Push
on:
  workflow_call:
    inputs:
      image-name:
        required: true
        type: string
      dockerfile:
        required: false
        type: string
        default: Dockerfile
      environment:
        required: true
        type: string
    secrets:
      AWS_ACCESS_KEY_ID:
        required: true
      AWS_SECRET_ACCESS_KEY:
        required: true
    outputs:
      image-tag:
        description: "The pushed image tag"
        value: ${{ jobs.build.outputs.image-tag }}
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build, scan and push image
        id: meta
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -f ${{ inputs.dockerfile }} \
            -t $ECR_REGISTRY/${{ inputs.image-name }}:$IMAGE_TAG .
          trivy image --exit-code 1 --severity CRITICAL \
            $ECR_REGISTRY/${{ inputs.image-name }}:$IMAGE_TAG
          docker push $ECR_REGISTRY/${{ inputs.image-name }}:$IMAGE_TAG
          echo "tags=$ECR_REGISTRY/${{ inputs.image-name }}:$IMAGE_TAG" >> $GITHUB_OUTPUT
Calling this reusable workflow from any service repository:
# In the payment service repo: .github/workflows/ci.yml
name: Payment Service CI
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    uses: myorg/shared-workflows/.github/workflows/reusable-docker-build.yml@main
    with:
      image-name: payment-service
      environment: production
    secrets:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Matrix Builds
Matrix builds run the same job across multiple configurations simultaneously — testing on multiple Node.js versions, operating systems, or environments in parallel.
jobs:
  test:
    name: Test on Node ${{ matrix.node }} / ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        node: [18, 20, 22]
        os: [ubuntu-latest, windows-latest, macos-latest]
      fail-fast: false # Run all combinations even if one fails
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci
      - run: npm test
Caching Dependencies
Caching dramatically speeds up pipelines by reusing downloaded dependencies across runs:
- name: Cache npm downloads
  uses: actions/cache@v4
  with:
    path: ~/.npm # npm's download cache, not node_modules
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-
- name: Install dependencies
  run: npm ci
Environment Protection Rules
GitHub Environments define deployment targets (staging, production) with protection rules — required reviewers, deployment windows, and secret scoping:
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging # Uses staging environment secrets and rules
    steps:
      - name: Deploy to staging
        run: ./deploy.sh staging
  deploy-production:
    needs: [deploy-staging, integration-tests]
    runs-on: ubuntu-latest
    environment: production # Requires manual approval from approved reviewers
    steps:
      - name: Deploy to production
        run: ./deploy.sh production
Multi-Stage Deployment Architecture
A production-grade pipeline typically flows through multiple environments with automatic gates between stages:
Code Commit (feature branch)
↓
PR Pipeline: lint + unit tests + security scan
↓ (PR approved and merged to main)
Main Branch Pipeline:
Stage 1: Build Docker image + tag with commit SHA
Stage 2: Push to ECR
Stage 3: Deploy to DEV automatically
Stage 4: Run smoke tests on DEV
↓ (automatic on DEV success)
Stage 5: Deploy to STAGING automatically
Stage 6: Run full integration + performance tests
↓ (manual approval gate)
Stage 7: Deploy to PRODUCTION (canary 10%)
Stage 8: Monitor for 10 minutes
↓ (automatic on healthy metrics)
Stage 9: Full production rollout (100%)
Dynamic Environments
Advanced CI/CD creates a unique environment for every pull request — making feature branches testable in isolation on real infrastructure.
name: PR Environment
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  deploy-pr-env:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # needed so ./helm/webapp exists on the runner
      - name: Deploy PR environment
        env:
          PR_NUMBER: ${{ github.event.number }}
        run: |
          helm upgrade --install \
            webapp-pr-$PR_NUMBER \
            ./helm/webapp \
            --set image.tag=${{ github.sha }} \
            --set ingress.host=pr-$PR_NUMBER.staging.myapp.com \
            -n pr-environments \
            --create-namespace
      - name: Comment PR with environment URL
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '🚀 PR environment deployed: https://pr-${{ github.event.number }}.staging.myapp.com'
            })
When the PR closes, a separate workflow destroys the environment to reclaim resources.
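The teardown counterpart is small. A hedged sketch (the release and namespace names mirror the deploy job and are assumptions, as is the workflow filename):

```yaml
# .github/workflows/pr-env-teardown.yml (sketch)
name: PR Environment Teardown
on:
  pull_request:
    types: [closed] # fires whether the PR is merged or simply closed
jobs:
  destroy-pr-env:
    runs-on: ubuntu-latest
    steps:
      - name: Uninstall PR release
        env:
          PR_NUMBER: ${{ github.event.number }}
        run: |
          # Remove the Helm release created by the deploy workflow;
          # "|| true" keeps the job green if the release is already gone.
          helm uninstall webapp-pr-$PR_NUMBER -n pr-environments || true
```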
Pipeline Metrics and Optimization
Measure pipeline performance continuously. Slow or flaky pipelines are productivity killers:
- Pipeline duration: Target under 10 minutes for the main branch pipeline.
- Pipeline success rate: Should be above 95%. Below this indicates flaky tests or instability.
- Cache hit rate: High hit rates indicate effective caching; frequent misses suggest the cache key is churning and needs investigation.
- Optimization techniques: Parallelize independent jobs, cache aggressively, use spot/preemptible runners for cost, move slow tests to a separate nightly job.
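As a small worked example of the success-rate target (the run counts here are made up), the check reduces to a few lines of shell:

```shell
#!/bin/sh
# Hypothetical data: 187 of the last 200 main-branch runs succeeded.
TOTAL=200
SUCCESS=187

# Success rate as a percentage, one decimal place (awk does the float math).
RATE=$(awk "BEGIN { printf \"%.1f\", ($SUCCESS / $TOTAL) * 100 }")
echo "success rate: ${RATE}%"

# Compare against the 95% target; awk's exit status drives the branch.
if awk "BEGIN { exit !($RATE < 95) }"; then
  echo "below target: investigate flaky tests or instability"
else
  echo "on target"
fi
```

Fed from your CI provider's API instead of hard-coded numbers, the same check can run nightly and page the team when the pipeline itself becomes the bottleneck.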
Summary
- Jenkinsfiles define complex multi-stage pipelines as code — stored in Git, reviewed, and version-controlled.
- Parallel stages dramatically reduce pipeline duration by running independent jobs simultaneously.
- GitHub Actions reusable workflows centralize common pipeline logic for dozens of services.
- Matrix builds test against multiple versions and platforms in a single workflow definition.
- Environment protection rules enforce manual approval gates before sensitive deployments.
- Dynamic PR environments give developers isolated, reviewable, deployable versions of every feature branch.
- Measuring and optimizing pipeline metrics is ongoing work — fast, reliable pipelines are a competitive advantage.
