Jenkins Complete Master Guide

Jenkins is an open-source automation server that enables developers to build, test, and deploy applications automatically through continuous integration and continuous delivery (CI/CD) pipelines.


Installation Guide

Install Jenkins with Docker

# Create a Docker volume for persistent data
docker volume create jenkins_home

# Run Jenkins container
docker run -d \
  --name jenkins \
  -p 8080:8080 \
  -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

# Get the initial admin password
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword

# Access Jenkins at http://localhost:8080
# Enter the admin password and install suggested plugins

Install Jenkins on Ubuntu

# Add Jenkins repository
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key

echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null

# Install Java and Jenkins
sudo apt update
sudo apt install -y openjdk-17-jdk jenkins

# Start Jenkins service
sudo systemctl start jenkins
sudo systemctl enable jenkins

# Get initial password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword

# Access at http://your-server:8080
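
Jenkins LTS only runs on supported Java versions, which is why the step above installs OpenJDK 17. A small sketch for checking what is installed before starting the service; the `java_major` helper name is illustrative, and it parses captured `java -version` output rather than assuming a particular JVM vendor's format beyond the usual quoted version string:

```shell
# Extract the Java major version from `java -version` output piped on stdin.
# Handles both old-style "1.8.0_392" and new-style "17.0.9" numbering.
java_major() {
  awk -F'"' '/version/ {split($2, v, "."); print (v[1] == "1" ? v[2] : v[1])}'
}

# Example with captured output (run `java -version 2>&1 | java_major` live):
echo 'openjdk version "17.0.9" 2023-10-17' | java_major   # prints 17
```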

Install Jenkins on macOS

# Using Homebrew
brew install jenkins-lts

# Start Jenkins
brew services start jenkins-lts

# Get initial password
cat ~/.jenkins/secrets/initialAdminPassword

# Access at http://localhost:8080

BEGINNER LEVEL: Your First Jenkins Jobs

Scenario 1: Setting Up Jenkins for the First Time

Unlocking Jenkins and installing essential plugins

sequenceDiagram
    participant User as Developer
    participant Docker as Docker Container
    participant Jenkins as Jenkins Server
    participant Browser as Web Browser
    participant Plugins as Plugin Manager
    participant Admin as Admin User

    User->>Docker: docker run jenkins/jenkins:lts
    Docker->>Jenkins: Start Jenkins server
    Jenkins->>Jenkins: Generate initial admin password
    User->>Browser: Open http://localhost:8080
    Browser->>Jenkins: Request unlock page
    Jenkins-->>User: Show password prompt
    User->>Docker: docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
    Docker-->>User: Display password
    User->>Browser: Enter admin password
    Browser->>Jenkins: Submit password
    Jenkins->>Plugins: Show plugin selection
    User->>Browser: Select "Install suggested plugins"
    Browser->>Plugins: Install plugins
    Plugins->>Jenkins: Install Git, Pipeline, etc.
    Plugins-->>User: Create first admin user
    User->>Browser: Fill admin credentials
    Browser->>Admin: Create admin user
    Admin->>Jenkins: Setup complete
    Jenkins-->>User: Welcome to Jenkins!

Code:

# After starting Jenkins container, get the initial password
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword

# Expected output (example):
# a1b2c3d4e5f678901234567890123456

# In the Jenkins Setup Wizard:
# 1. Paste the password into the Unlock Jenkins page
# 2. Click "Install suggested plugins"
# 3. Wait for plugins to install (takes 5-10 minutes)
# 4. Create first admin user:
#    Username: admin
#    Password: securepassword123
#    Full name: Jenkins Admin
#    Email: admin@example.com
# 5. Click "Save and Continue"
# 6. Set Jenkins URL: http://localhost:8080
# 7. Click "Save and Finish"

# Access Jenkins home page - you're now ready to create jobs!


Scenario 2: Creating Your First Freestyle Job

Building a simple "Hello World" job manually

sequenceDiagram
    participant User as Developer
    participant Browser as Jenkins UI
    participant Job as Jenkins Job
    participant Executor as Jenkins Executor
    participant Build as Build Process

    User->>Browser: Click "New Item"
    Browser->>User: Show job types
    User->>Browser: Enter "hello-world-job", select "Freestyle project"
    Browser->>Job: Create job configuration
    User->>Browser: Add build step "Execute shell"
    Browser->>Job: Configure shell step
    User->>Browser: Enter: echo "Hello from Jenkins!"
    Browser->>Job: Save configuration
    User->>Browser: Click "Build Now"
    Browser->>Job: Trigger build
    Job->>Executor: Request executor
    Executor->>Build: Execute shell command
    Build->>Build: echo "Hello from Jenkins!"
    Build-->>Job: Build success
    Job->>Browser: Show green checkmark
    Browser-->>User: Build #1 successful!

Code:

# No command-line code for this - it's all UI-driven:

# Step-by-step in Jenkins UI:
# 1. Click "New Item" on left sidebar
# 2. Enter job name: hello-world-job
# 3. Select "Freestyle project"
# 4. Click "OK"
# 5. In configuration:
#    - Description: "My first Jenkins job"
#    - Scroll to "Build" section
#    - Click "Add build step"
#    - Select "Execute shell"
#    - In command box:
      echo "Hello from Jenkins!"
      echo "Build number: $BUILD_NUMBER"
      echo "Build ID: $BUILD_ID"
      date
      whoami
      pwd

# 6. Click "Save"
# 7. Click "Build Now" on left sidebar
# 8. Click build #1 in "Build History"
# 9. Click "Console Output" to see:
#    Started by user admin
#    Running as SYSTEM
#    Building in workspace /var/jenkins_home/workspace/hello-world-job
#    [hello-world-job] $ /bin/sh -xe /tmp/jenkins123456.sh
#    + echo 'Hello from Jenkins!'
#    Hello from Jenkins!
#    + echo 'Build number: 1'
#    Build number: 1
#    + date
#    Sat Nov 30 10:00:00 UTC 2024
#    Finished: SUCCESS

# View environment variables available:
# BUILD_NUMBER - The current build number
# BUILD_ID - The current build ID (same as BUILD_NUMBER for new jobs)
# BUILD_DISPLAY_NAME - Display name (#1)
# JOB_NAME - Name of the job
# WORKSPACE - Path to workspace directory
# JENKINS_URL - Jenkins server URL
# NODE_NAME - Name of the agent node


Scenario 3: Building a Simple Pipeline Job

Creating a pipeline with Git checkout and build steps

sequenceDiagram
    participant User as Developer
    participant Git as Git Repository
    participant Jenkins as Jenkins Server
    participant Job as Pipeline Job
    participant Executor as Jenkins Executor
    participant Node as Build Node

    User->>Jenkins: Click "New Item" → "Pipeline"
    User->>Jenkins: Configure Git repository
    Jenkins->>Git: Store repository URL
    User->>Jenkins: Add pipeline script
    Jenkins->>Job: Save pipeline configuration
    User->>Jenkins: Click "Build Now"
    Job->>Executor: Trigger pipeline
    Executor->>Node: Allocate workspace
    Node->>Git: git clone <repository>
    Git-->>Node: Download source code
    Node->>Node: Execute pipeline stages
    Node->>Node: Build stage: compile code
    Node->>Node: Test stage: run tests
    Node->>Node: Archive stage: save artifacts
    Node-->>Job: Pipeline complete
    Job->>Jenkins: Store logs & artifacts
    Jenkins-->>User: Show pipeline view

Code:

# Create a demo project for the pipeline to build
mkdir jenkins-pipeline-demo && cd jenkins-pipeline-demo
git init

# Create a simple Node.js app
cat > app.js << 'EOF'
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('Hello from Jenkins Pipeline!'));
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`App running on port ${port}`));
EOF

# Create package.json
cat > package.json << 'EOF'
{
  "name": "jenkins-demo",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js",
    "test": "node test.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
EOF

# Create simple test
cat > test.js << 'EOF'
const assert = require('assert');
assert.strictEqual(1 + 1, 2);
console.log('✓ Basic test passed');
EOF

# Create Jenkinsfile (declarative pipeline)
cat > Jenkinsfile << 'EOF'
pipeline {
    agent any

    tools {
        // Requires the NodeJS plugin with a tool configured under this name
        nodejs 'NodeJS 18'
    }

    stages {
        stage('Checkout') {
            steps {
                echo 'Checking out source code...'
                checkout scm
            }
        }

        stage('Install Dependencies') {
            steps {
                echo 'Installing Node.js dependencies...'
                sh 'npm install'
            }
        }

        stage('Build') {
            steps {
                echo 'Building application...'
                sh 'echo "Build complete"'
            }
        }

        stage('Test') {
            steps {
                echo 'Running tests...'
                sh 'npm test'
            }
        }

        stage('Archive') {
            steps {
                echo 'Archiving artifacts...'
                archiveArtifacts artifacts: '**/*.js', fingerprint: true
            }
        }
    }

    post {
        always {
            echo 'Cleaning up...'
            cleanWs()
        }
        success {
            echo 'Pipeline succeeded!'
        }
        failure {
            echo 'Pipeline failed!'
        }
    }
}
EOF

# Add and commit to Git
git add .
git commit -m "Initial commit with Jenkinsfile"
git branch -M main

# In Jenkins UI:
# 1. Click "New Item" → "Pipeline" → Name: "node-app-pipeline"
# 2. Check "GitHub project" and enter your repo URL
# 3. In "Pipeline" section:
#    - Definition: Pipeline script from SCM
#    - SCM: Git
#    - Repository URL: <your-git-repo-url>
#    - Branches to build: */main
#    - Script Path: Jenkinsfile (default)
# 4. Save and click "Build Now"

# For local Git testing (if repo is local):
# In pipeline configuration, select "Pipeline script" and paste:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo "Building from local repo"'
            }
        }
    }
}

# After successful build, check:
# - Build artifacts in "Build Artifacts"
# - Test results in "Test Result Trend"
# - Console output for full logs


Scenario 4: Git Webhook-Triggered Build

Automatically building when code is pushed

sequenceDiagram
    participant Dev as Developer
    participant GitHub as GitHub Repository
    participant Webhook as GitHub Webhook
    participant Jenkins as Jenkins Server
    participant Job as Jenkins Job
    participant Build as Build Process

    Dev->>GitHub: git push origin main
    GitHub->>Webhook: Push event detected
    Webhook->>Jenkins: POST /github-webhook/
    Jenkins->>Job: Trigger build
    Job->>Jenkins: Check for changes
    Jenkins->>GitHub: git fetch origin
    GitHub-->>Jenkins: Return new commits
    Jenkins->>Build: Start pipeline
    Build->>Build: Execute stages
    Build->>GitHub: Update commit status (pending)
    Build->>Build: Run tests
    alt Tests pass
        Build->>GitHub: Update status (success)
        GitHub-->>Dev: Show green checkmark
    else Tests fail
        Build->>GitHub: Update status (failure)
        GitHub-->>Dev: Show red X
    end

Code:

# First, configure Jenkins Git plugin:
# 1. Install "GitHub Integration" plugin
# Manage Jenkins → Manage Plugins → Available → Search "GitHub Integration" → Install

# 2. Configure GitHub server
# Manage Jenkins → System → GitHub → Add GitHub Server
# - Name: GitHub
# - API URL: https://api.github.com
# - Credentials: Add GitHub token
#   - Kind: Secret text
#   - Secret: <your-github-token>
#   - ID: github-token

# 3. In your pipeline job:
# Configure → Build Triggers → Select "GitHub hook trigger for GITScm polling"

# 4. Set up GitHub webhook:
# Go to GitHub repository → Settings → Webhooks → Add webhook
# - Payload URL: http://<your-jenkins-url>/github-webhook/
# - Content type: application/json
# - Events: Just the push event
# - Active: checked
# - Click "Add webhook"

# Create a Jenkinsfile with GitHub status updates
cat > Jenkinsfile << 'EOF'
pipeline {
    agent any

    triggers {
        githubPush()
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
                script {
                    // Update GitHub commit status
                    githubPRStatusPublisher([
                        statusMsg: [content: 'Jenkins build started'],
                        unstableAs: 'SUCCESS',
                        errorHandlers: [[$class: 'ChangingBuildStatusErrorHandler', result: 'UNSTABLE']]
                    ])
                }
            }
        }

        stage('Build') {
            steps {
                echo "Building commit: ${env.GIT_COMMIT}"
                echo "Branch: ${env.GIT_BRANCH}"
                sh 'make build'
            }
        }

        stage('Test') {
            steps {
                sh 'make test'
            }
            post {
                always {
                    junit '**/target/test-*.xml'
                }
            }
        }
    }

    post {
        success {
            setGitHubPullRequestStatus([
                status: 'SUCCESS',
                message: 'All tests passed!'
            ])
        }
        failure {
            setGitHubPullRequestStatus([
                status: 'FAILURE',
                message: 'Build failed. Check logs.'
            ])
        }
    }
}
EOF

# For GitLab integration:
# In Jenkins job:
# Build Triggers → Select "Build when a change is pushed to GitLab"
# Copy the "GitLab webhook URL"

# In GitLab:
# Settings → Webhooks → Add webhook
# URL: <paste-jenkins-webhook-url>
# Trigger: Push events
# Add webhook

# Test the webhook:
git add .
git commit -m "Test webhook trigger"
git push origin main
# Jenkins should automatically start a build within seconds

# View webhook deliveries:
# GitHub → Webhooks → Recent Deliveries
# Check for green checkmarks (200 response)

# Debug webhook issues:
# Jenkins → Manage Jenkins → System Log
# Add new log recorder for "com.cloudbees.jenkins.GitHubPushTrigger"
# Check logs for webhook reception
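
If deliveries arrive but are rejected, it can also help to recompute the payload signature GitHub sends in the `X-Hub-Signature-256` header and compare it against the delivery details. A sketch using openssl; the secret and payload here are placeholders, and in a real check you must sign the exact raw request body:

```shell
# Recompute a GitHub webhook signature (X-Hub-Signature-256 header).
# SECRET and PAYLOAD are placeholder values for illustration.
SECRET='my-webhook-secret'
PAYLOAD='{"ref":"refs/heads/main"}'

# HMAC-SHA256 of the raw body, keyed with the shared webhook secret
sig=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "sha256=$sig"
```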


Scenario 5: Using Jenkins Credentials

Securely storing and using passwords, tokens, and keys

sequenceDiagram
    participant Dev as Developer
    participant Jenkins as Jenkins Server
    participant Credentials as Jenkins Credentials Store
    participant Pipeline as Jenkins Pipeline
    participant Script as Build Script
    participant API as External API

    Dev->>Jenkins: Manage Jenkins → Credentials
    Jenkins->>Credentials: Add new credential
    Dev->>Jenkins: Select "Secret text" for API token
    Jenkins->>Credentials: Store encrypted
    Credentials->>Jenkins: ID: api-token
    Dev->>Pipeline: Configure pipeline
    Pipeline->>Jenkins: withCredentials([string(credentialsId: 'api-token', variable: 'TOKEN')])
    Jenkins->>Script: Inject TOKEN environment variable
    Script->>API: curl -H "Authorization: Bearer $TOKEN" https://api.example.com
    API-->>Script: Authenticated response
    Script->>Script: Process API data
    Script-->>Pipeline: Build complete
    Note over Credentials: Secrets never in code!

Code:

# Add credentials in Jenkins UI:
# 1. Manage Jenkins → Manage Credentials → (global) → Add Credentials

# For API Token (Secret text):
# - Kind: Secret text
# - Secret: your-api-token-here
# - ID: api-token
# - Description: API Token for external service
# - Click "Create"

# For Username/Password:
# - Kind: Username with password
# - Username: myuser
# - Password: mypassword
# - ID: docker-credentials
# - Description: Docker Hub credentials

# For SSH Private Key:
# - Kind: SSH Username with private key
# - Username: git
# - Private Key: Paste SSH key
# - ID: github-ssh-key

# For Certificate:
# - Kind: Certificate
# - Upload PKCS#12 certificate

# Use credentials in pipeline:
cat > Jenkinsfile.credentials << 'EOF'
pipeline {
    agent any

    stages {
        stage('Deploy with API Token') {
            steps {
                withCredentials([string(credentialsId: 'api-token', variable: 'API_TOKEN')]) {
                    sh '''
                        echo "Deploying using API token..."
                        curl -X POST https://api.example.com/deploy \
                          -H "Authorization: Bearer ${API_TOKEN}" \
                          -d '{"version": "1.0.0"}'
                    '''
                }
            }
        }

        stage('Docker Login') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'docker-credentials', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS')]) {
                    sh '''
                        echo ${DOCKER_PASS} | docker login -u ${DOCKER_USER} --password-stdin
                        docker build -t ${DOCKER_USER}/myapp:latest .
                        docker push ${DOCKER_USER}/myapp:latest
                    '''
                }
            }
        }

        stage('Git Operations with SSH') {
            steps {
                withCredentials([sshUserPrivateKey(credentialsId: 'github-ssh-key', keyFileVariable: 'SSH_KEY')]) {
                    sh '''
                        export GIT_SSH_COMMAND="ssh -i ${SSH_KEY} -o StrictHostKeyChecking=no"
                        git clone git@github.com:myorg/private-repo.git
                        cd private-repo
                        git tag v1.0.${BUILD_NUMBER}
                        git push origin v1.0.${BUILD_NUMBER}
                    '''
                }
            }
        }

        stage('Use Certificate') {
            steps {
                withCredentials([certificate(credentialsId: 'prod-certificate', keystoreVariable: 'KEYSTORE')]) {
                    sh '''
                        keytool -list -keystore ${KEYSTORE}
                        # Use certificate for signing
                    '''
                }
            }
        }
    }
}
EOF

# Use credentials in a freestyle job:
# Note: withCredentials is a Pipeline-only step. Freestyle jobs bind
# credentials through the Credentials Binding plugin instead:
# Build Environment → Use secret text(s) or file(s)
# - Binding: Secret text
# - Variable: AWS_KEY
# - Credentials: aws-access-key

# Then in Build → Add build step → Execute shell, the variable is
# already in the environment:
export AWS_ACCESS_KEY_ID=${AWS_KEY}
aws s3 cp myfile.txt s3://my-bucket/

# For username/password credentials, add a "Username and password (separated)" binding:
# - Username Variable: USER
# - Password Variable: PASS
# - Credentials: Select your credentials

# For Kubernetes secrets integration:
# 1. Install the Kubernetes Credentials Provider plugin
# 2. Label your Kubernetes Secrets so the plugin discovers them; they then
#    appear automatically under Manage Jenkins → Manage Credentials
# 3. Reference them in a pipeline by credential ID (the Secret's name), e.g.:
#    withCredentials([usernamePassword(credentialsId: 'prod-secrets',
#        usernameVariable: 'USER', passwordVariable: 'PASS')]) { ... }
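
As a sketch of what the provider consumes, assuming the plugin's documented label scheme (names, namespace, and values here are illustrative), a Secret it can surface as a username/password credential looks like:

```yaml
# Illustrative only: a Secret the Kubernetes Credentials Provider plugin
# can pick up as a username/password credential with ID "prod-secrets".
apiVersion: v1
kind: Secret
metadata:
  name: prod-secrets
  namespace: production
  labels:
    jenkins.io/credentials-type: usernamePassword
  annotations:
    jenkins.io/credentials-description: "Production deploy credentials"
type: Opaque
stringData:
  username: deployer
  password: s3cret
```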


INTERMEDIATE LEVEL: Advanced Pipelines & Integration

Scenario 6: Parameterized Builds with Choice & Validation

Building with dynamic parameters and input steps

sequenceDiagram
    participant Dev as Developer
    participant Jenkins as Jenkins UI
    participant Params as Build Parameters
    participant Pipeline as Pipeline Job
    participant Input as Input Step
    participant Build as Build Process
    participant Deploy as Deployment Target

    Dev->>Jenkins: Click "Build with Parameters"
    Jenkins->>Params: Show parameter form
    Params->>Jenkins: Environment: [dev, staging, prod]
    Params->>Jenkins: Deploy: checkbox
    Params->>Jenkins: Version: dropdown
    Dev->>Jenkins: Select parameters
    Jenkins->>Pipeline: Start build with parameters
    Pipeline->>Pipeline: Validate parameters
    alt Environment == prod
        Pipeline->>Input: input(message: 'Approve production deploy?')
        Input->>Dev: Wait for approval
        Dev->>Input: Click "Proceed"
    end
    Input-->>Pipeline: Approved
    Pipeline->>Build: Execute stages with parameters
    Build->>Deploy: Deploy to selected environment
    Deploy-->>Build: Deployment complete
    Build->>Dev: Send notification
    Note over Params: Dynamic build configuration

Code:

# Create pipeline with parameters
cat > Jenkinsfile.parameters << 'EOF'
pipeline {
    agent any

    parameters {
        choice(
            name: 'ENVIRONMENT',
            choices: ['dev', 'staging', 'prod'],
            description: 'Select deployment environment'
        )

        string(
            name: 'VERSION',
            defaultValue: '1.0.0',
            description: 'Application version to deploy',
            trim: true
        )

        booleanParam(
            name: 'DEPLOY',
            defaultValue: false,
            description: 'Check to deploy after build'
        )

        text(
            name: 'DEPLOYMENT_NOTES',
            defaultValue: '',
            description: 'Enter deployment notes'
        )

        password(
            name: 'API_KEY',
            defaultValue: '',
            description: 'Optional API key override'
        )

        file(
            name: 'CONFIG_FILE',
            description: 'Upload configuration file'
        )
    }

    stages {
        stage('Validate Parameters') {
            steps {
                script {
                    echo "Environment: ${params.ENVIRONMENT}"
                    echo "Version: ${params.VERSION}"
                    echo "Deploy: ${params.DEPLOY}"
                    echo "Notes: ${params.DEPLOYMENT_NOTES}"

                    // Validate version format
                    if (!params.VERSION.matches(/^\d+\.\d+\.\d+$/)) {
                        error "Invalid version format. Use X.Y.Z"
                    }

                    // Require notes for prod
                    if (params.ENVIRONMENT == 'prod' && params.DEPLOYMENT_NOTES.isEmpty()) {
                        error "Production deployment requires notes"
                    }
                }
            }
        }

        stage('Build') {
            steps {
                sh "make build VERSION=${params.VERSION}"
                archiveArtifacts artifacts: "dist/app-${params.VERSION}.jar", fingerprint: true
            }
        }

        stage('Approval for Production') {
            when {
                beforeAgent true
                allOf {
                    expression { params.ENVIRONMENT == 'prod' }
                    expression { params.DEPLOY }
                }
            }
            steps {
                script {
                    timeout(time: 1, unit: 'HOURS') {
                        // With submitterParameter set, input returns a map of the
                        // entered parameters plus the approver's user ID
                        def approval = input(
                            id: 'prod-deploy-approval',
                            message: "Deploy to PRODUCTION?",
                            ok: "Deploy",
                            parameters: [
                                text(name: 'APPROVAL_JUSTIFICATION', defaultValue: '', description: 'Justify production deployment')
                            ],
                            submitterParameter: 'APPROVER'
                        )
                        echo "Deployment approved by: ${approval.APPROVER}"
                        currentBuild.description = "Approved by ${approval.APPROVER}"
                    }
                }
            }
        }

        stage('Deploy') {
            when {
                beforeAgent true
                expression { params.DEPLOY }
            }
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: "${params.ENVIRONMENT}-deploy", usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                        sh """
                            ./deploy.sh \
                                --environment ${params.ENVIRONMENT} \
                                --version ${params.VERSION} \
                                --username ${USER} \
                                --password ${PASS}
                        """
                    }
                }
            }
        }
    }

    post {
        always {
            emailext (
                to: "${env.CHANGE_AUTHOR_EMAIL ?: 'team@example.com'}",
                subject: "Build ${env.JOB_NAME} - ${currentBuild.currentResult}",
                body: """
                    <h2>Build ${env.BUILD_NUMBER}</h2>
                    <p>Result: ${currentBuild.currentResult}</p>
                    <p>Environment: ${params.ENVIRONMENT}</p>
                    <p>Version: ${params.VERSION}</p>
                    <p>Deploy: ${params.DEPLOY}</p>
                    <p>Notes: ${params.DEPLOYMENT_NOTES}</p>
                    <p>Console: ${env.BUILD_URL}console</p>
                """,
                mimeType: 'text/html'
            )
        }
    }
}
EOF

# Parameter types and usage:
# choice: Single selection dropdown
# booleanParam: Checkbox (true/false)
# string: Text field
# text: Multi-line text area
# password: Masked password field
# file: File upload

# Dynamic parameter generation:
# Install "Active Choices" plugin for dynamic parameters
# Install "Extended Choice Parameter" for multi-select

# Active Choices example:
cat > Jenkinsfile.dynamic << 'EOF'
properties([
    parameters([
        choice(
            name: 'SERVICE',
            choices: ['api', 'web', 'worker'],
            description: 'Service to deploy'
        ),
        [$class: 'CascadeChoiceParameter',
            name: 'VERSION',
            referencedParameters: 'SERVICE',
            choiceType: 'PT_SINGLE_SELECT',
            script: [
                $class: 'GroovyScript',
                script: [
                    classpath: [],
                    sandbox: true,
                    script: '''
                        if (SERVICE == "api") {
                            return ["v1.0.0", "v1.1.0", "v2.0.0"]
                        } else {
                            return ["v1.0.0", "v1.0.1"]
                        }
                    '''
                ],
                fallbackScript: [
                    classpath: [],
                    sandbox: true,
                    script: 'return ["unknown"]'
                ]
            ]
        ]
    ])
])
EOF

# Conditional stage execution:
# Run stage only for specific parameter
stage('Deploy API') {
    when {
        expression { params.SERVICE == 'api' }
    }
    steps {
        sh './deploy-api.sh'
    }
}

# Parameter validation functions:
def validateParams() {
    if (params.VERSION.isEmpty()) {
        error("Version parameter is required")
    }
    if (params.DEPLOY && params.ENVIRONMENT == 'prod' && !params.DEPLOYMENT_NOTES) {
        error("Production deployment requires notes")
    }
}

// Call in first stage
stage('Validate') {
    steps {
        script { validateParams() }
    }
}
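
The same X.Y.Z format check can be smoke-tested outside Jenkins before copying it into a pipeline; a minimal shell equivalent (the `valid_semver` helper name is illustrative):

```shell
# Mirror of the pipeline's X.Y.Z version check, runnable locally.
valid_semver() {
  printf '%s' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'
}

valid_semver "1.0.0" && echo "1.0.0 ok"       # prints 1.0.0 ok
valid_semver "1.0"   || echo "1.0 rejected"   # prints 1.0 rejected
```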

# Artifacts from file parameter
# (note: classic file parameters have poor support in Pipeline jobs; the
# File Parameter plugin's base64File/stashedFile types are the usual workaround):
stage('Process Config') {
    steps {
        script {
            if (params.CONFIG_FILE) {
                sh '''
                    mkdir -p config
                    cp $CONFIG_FILE config/app.config
                    chmod 600 config/app.config
                '''
            }
        }
    }
}

# Use the password parameter safely: fall back to the stored credential
# when the optional API_KEY parameter is left empty
stage('Use API Key') {
    steps {
        withCredentials([string(credentialsId: 'api-service-key', variable: 'SERVICE_KEY')]) {
            sh '''
                export API_KEY=${API_KEY:-$SERVICE_KEY}
                ./scripts/api-call.sh
            '''
        }
    }
}
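
The `${override:-fallback}` expansion used in the sh step above is plain POSIX shell: it yields the first variable when it is set and non-empty, else the fallback. A quick local check (the `pick_key` helper and its values are illustrative):

```shell
# Demonstrate the ${override:-fallback} pattern from the pipeline's sh step.
pick_key() {
  override="$1"
  fallback="$2"
  echo "${override:-$fallback}"
}

pick_key ""       "stored-secret"   # prints stored-secret
pick_key "manual" "stored-secret"   # prints manual
```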


Scenario 7: Multi-Stage Docker Build Pipeline

Building, testing, and pushing Docker images

sequenceDiagram
    participant Dev as Developer
    participant Git as Git Repository
    participant Jenkins as Jenkins Server
    participant Docker as Docker Daemon
    participant Registry as Docker Registry
    participant K8s as Kubernetes Cluster

    Dev->>Git: git push with Dockerfile
    Git->>Jenkins: Webhook triggers build
    Jenkins->>Jenkins: Checkout code
    Jenkins->>Jenkins: Load Docker credentials
    Jenkins->>Docker: docker build -t myapp:${BUILD_NUMBER} .
    Docker->>Docker: Build image layers
    Docker-->>Jenkins: Image built

    Jenkins->>Docker: docker run myapp:${BUILD_NUMBER} npm test
    Docker->>Docker: Run tests in container
    Docker-->>Jenkins: Tests passed

    Jenkins->>Registry: docker login -u USER -p PASS
    Registry-->>Jenkins: Login successful
    Jenkins->>Registry: docker push myapp:${BUILD_NUMBER}
    Registry-->>Jenkins: Image pushed

    Jenkins->>K8s: kubectl set image deployment/myapp app=myapp:${BUILD_NUMBER}
    K8s->>K8s: Rolling update
    K8s-->>Jenkins: Deployment successful

    Jenkins->>Dev: Send notification "Deploy complete"
    Note over Docker: Full container lifecycle

Code:

# Create Dockerfile
cat > Dockerfile << 'EOF'
FROM node:18-alpine AS builder

WORKDIR /app
COPY package*.json ./
# Install all dependencies here: the build step typically needs devDependencies
RUN npm ci

COPY . .
RUN npm run build
# Drop devDependencies before the runtime stage copies node_modules
RUN npm prune --omit=dev

FROM node:18-alpine

WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

EXPOSE 3000
CMD ["node", "dist/server.js"]
EOF

# Create Jenkinsfile for Docker pipeline
cat > Jenkinsfile.docker << 'EOF'
pipeline {
    agent none

    environment {
        // Define registry and image name
        DOCKER_REGISTRY = 'docker.io'
        IMAGE_NAME = 'myorg/myapp'
        // Use BUILD_NUMBER for versioning
        IMAGE_TAG = "${BUILD_NUMBER}"
        // Full image reference
        FULL_IMAGE = "${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
        LATEST_IMAGE = "${DOCKER_REGISTRY}/${IMAGE_NAME}:latest"
    }

    stages {
        stage('Checkout') {
            agent any
            steps {
                checkout scm
            }
        }

        stage('Docker Build') {
            agent any
            steps {
                script {
                    // Build Docker image
                    docker.build("${IMAGE_NAME}:${IMAGE_TAG}", "-f Dockerfile .")
                }
            }
        }

        stage('Test in Docker') {
            agent any
            steps {
                script {
                    // Run tests in container
                    docker.image("${IMAGE_NAME}:${IMAGE_TAG}").inside {
                        sh 'npm test'
                        sh 'npm run integration-test'
                    }
                }
            }
        }

        stage('Security Scan') {
            agent any
            steps {
                script {
                    // Scan image for vulnerabilities
                    sh """
                        trivy image --severity HIGH,CRITICAL \
                          --exit-code 1 ${IMAGE_NAME}:${IMAGE_TAG} || \
                          { echo "Vulnerabilities found!"; exit 1; }
                    """
                }
            }
        }

        stage('Push to Registry') {
            agent any
            steps {
                script {
                    // Login and push
                    docker.withRegistry("https://${DOCKER_REGISTRY}", 'dockerhub-credentials') {
                        // Push versioned tag
                        sh "docker push ${FULL_IMAGE}"

                        // Tag and push latest
                        sh "docker tag ${FULL_IMAGE} ${LATEST_IMAGE}"
                        sh "docker push ${LATEST_IMAGE}"
                    }
                }
            }
        }

        stage('Deploy to Staging') {
            agent any
            when {
                branch 'main'
            }
            steps {
                script {
                    // Deploy to staging Kubernetes
                    withKubeConfig([credentialsId: 'kube-config']) {
                        sh """
                            kubectl set image deployment/myapp \
                              app=${FULL_IMAGE} \
                              -n staging

                            kubectl rollout status deployment/myapp -n staging
                        """
                    }
                }
            }
        }

        stage('Deploy to Production') {
            agent any
            when {
                branch 'main'
            }
            input {
                message "Deploy to production?"
                ok "Deploy"
            }
            steps {
                script {
                    withKubeConfig([credentialsId: 'kube-config']) {
                        sh """
                            kubectl set image deployment/myapp \
                              app=${FULL_IMAGE} \
                              -n production

                            kubectl rollout status deployment/myapp -n production
                        """
                    }
                }
            }
        }
    }

    post {
        always {
            script {
                // Clean up Docker images
                sh """
                    docker rmi ${FULL_IMAGE} || true
                    docker rmi ${LATEST_IMAGE} || true
                """

                // Clean workspace
                cleanWs()
            }
        }
        success {
            script {
                // Post build success notification
                currentBuild.description = "Image: ${FULL_IMAGE}"
            }
        }
    }
}
EOF

# Multi-platform build with buildx:
cat > Jenkinsfile.buildx << 'EOF'
pipeline {
    agent {
        label 'docker-buildx'
    }

    stages {
        stage('Setup') {
            steps {
                script {
                    // Create and use buildx builder
                    sh 'docker buildx create --use --name multiarch-builder'
                }
            }
        }

        stage('Build and Push Multi-Arch') {
            steps {
                script {
                    withDockerRegistry([credentialsId: 'dockerhub', url: 'https://index.docker.io/v1/']) {
                        sh """
                            docker buildx build \
                              --platform linux/amd64,linux/arm64 \
                              --tag myorg/myapp:${BUILD_NUMBER} \
                              --tag myorg/myapp:latest \
                              --push \
                              --file Dockerfile \
                              .
                        """
                    }
                }
            }
        }
    }
}
EOF

# Docker-in-Docker setup for agents:
# In Jenkins agent Dockerfile:
cat > Dockerfile.jenkins-agent << 'EOF'
FROM jenkins/inbound-agent:latest

# Install Docker CLI
USER root
RUN apt-get update && \
    apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    lsb-release && \
    install -m 0755 -d /etc/apt/keyrings && \
    curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc && \
    echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(lsb_release -cs) stable" \
      > /etc/apt/sources.list.d/docker.list && \
    apt-get update && \
    apt-get install -y docker-ce-cli

USER jenkins
EOF

# Build and push Jenkins agent:
docker build -t myorg/jenkins-agent:latest -f Dockerfile.jenkins-agent .
docker push myorg/jenkins-agent:latest

# Configure agent in Jenkins:
# Manage Jenkins → Nodes → New Node → Name: docker-agent
# Launch method: Launch agent via SSH
# Host: 192.168.1.100
# Credentials: ssh-key
# Label: docker-buildx

# Cache Docker layers in pipeline:
stage('Docker Build with Cache') {
    steps {
        script {
            // Pull previous image for cache
            sh """
                docker pull ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest || true
                docker build \
                  --cache-from ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest \
                  -t ${IMAGE_NAME}:${BUILD_NUMBER} \
                  -t ${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER} \
                  .
            """
        }
    }
}
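
If the agents run Docker with BuildKit enabled, `--cache-from` only takes effect when cache metadata was embedded in the image at build time. A variant of the stage above with the inline-cache flag (same placeholder variables as above; `BUILDKIT_INLINE_CACHE` is a standard BuildKit build argument):

```groovy
stage('Docker Build with Cache (BuildKit)') {
    steps {
        script {
            sh """
                docker pull ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest || true
                DOCKER_BUILDKIT=1 docker build \
                  --build-arg BUILDKIT_INLINE_CACHE=1 \
                  --cache-from ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest \
                  -t ${IMAGE_NAME}:${BUILD_NUMBER} \
                  .
            """
        }
    }
}
```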

# Use Docker Compose in pipeline:
cat > docker-compose.test.yml << 'EOF'
version: '3.8'
services:
  app:
    build: .
    environment:
      - NODE_ENV=test
  database:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=test
EOF

stage('Integration Tests') {
    steps {
        sh 'docker-compose -f docker-compose.test.yml run --rm app npm run test:integration'
    }
}
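
Integration-test stages like this usually have to wait for dependent services to come up before running anything. A minimal polling helper (the function name and probe command are illustrative):

```shell
# wait_for retries a probe command until it succeeds or a timeout expires.
# Usage: wait_for <timeout-seconds> <command...>
#   e.g. wait_for 60 curl -fsS http://localhost:8080/health
wait_for() {
  timeout="$1"; shift
  elapsed=0
  until "$@" > /dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "Timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
}
```

Drop it into the repo (e.g. under scripts/) and call it from a `sh` step after `docker-compose up -d`, before the test command.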


Scenario 8: Parallel Pipeline Execution

Running multiple stages simultaneously for faster builds

sequenceDiagram
    participant Jenkins as Jenkins Controller
    participant Stage1 as Checkout Stage
    participant Parallel as Parallel Stages
    participant Build as Build (Backend)
    participant TestFront as Test Frontend
    participant TestAPI as Test API
    participant Lint as Lint Code
    participant Deploy as Deploy Stage

    Jenkins->>Stage1: Sequential checkout
    Stage1->>Parallel: Complete
    Parallel->>Build: Run in parallel
    Parallel->>TestFront: Run in parallel
    Parallel->>TestAPI: Run in parallel
    Parallel->>Lint: Run in parallel
    Build->>Deploy: Wait for all to complete
    TestFront->>Deploy: Wait for all to complete
    TestAPI->>Deploy: Wait for all to complete
    Lint->>Deploy: Wait for all to complete
    Deploy->>Jenkins: Deploy to staging
    Note over Parallel: Wall-clock time drops to that of the slowest parallel stage

Code:

# Create parallel pipeline
cat > Jenkinsfile.parallel << 'EOF'
pipeline {
    agent any

    stages {
        stage('Prepare') {
            steps {
                sh 'echo "Setting up build environment..."'
                checkout scm
            }
        }

        stage('Parallel Tests') {
            parallel {
                stage('Backend Tests') {
                    agent {
                        docker {
                            image 'maven:3.8-openjdk-17'
                            args '-v $HOME/.m2:/root/.m2'
                        }
                    }
                    steps {
                        dir('backend') {
                            sh 'mvn clean test'
                            junit 'target/surefire-reports/*.xml'
                            publishHTML([
                                reportDir: 'target/site/jacoco',
                                reportFiles: 'index.html',
                                reportName: 'Backend Coverage'
                            ])
                        }
                    }
                }

                stage('Frontend Tests') {
                    agent {
                        docker {
                            image 'node:18-alpine'
                        }
                    }
                    steps {
                        dir('frontend') {
                            sh 'npm ci'
                            sh 'npm test -- --coverage'
                            junit 'test-results/jest.xml'
                            publishHTML([
                                reportDir: 'coverage/lcov-report',
                                reportFiles: 'index.html',
                                reportName: 'Frontend Coverage'
                            ])
                        }
                    }
                }

                stage('API Integration Tests') {
                    steps {
                        script {
                            // Start test environment
                            sh 'docker-compose -f test-compose.yml up -d'

                            // Wait for services
                            sh './scripts/wait-for-services.sh http://localhost:8080/health'

                            // Run tests
                            sh 'newman run api-tests.postman_collection.json'

                            // Cleanup
                            sh 'docker-compose -f test-compose.yml down'
                        }
                    }
                }

                stage('Security Scan') {
                    steps {
                        script {
                            // Declarative does not allow nesting parallel stages inside
                            // a parallel block, so use the scripted parallel step here
                            parallel(
                                "Dependency Check": {
                                    sh 'mvn dependency-check:check'
                                    dependencyCheckPublisher pattern: 'target/dependency-check-report.xml'
                                },
                                "Container Scan": {
                                    sh 'trivy image --format json --output trivy-report.json myapp:test'
                                    archiveArtifacts artifacts: 'trivy-report.json'
                                },
                                "Code Quality": {
                                    sh 'sonar-scanner -Dsonar.projectKey=myapp'
                                }
                            )
                        }
                    }
                }
            }
        }

        stage('Build Artifacts') {
            steps {
                script {
                    parallel(
                        "Backend": {
                            dir('backend') {
                                sh 'mvn clean package -DskipTests'
                                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
                            }
                        },
                        "Frontend": {
                            dir('frontend') {
                                sh 'npm run build'
                                archiveArtifacts artifacts: 'dist/**/*', fingerprint: true
                            }
                        }
                    )
                }
            }
        }

        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh './deploy.sh'
            }
        }
    }

    post {
        always {
            // Collect all test reports
            junit testResults: '**/test-*.xml', allowEmptyResults: true

            // Clean up the workspace
            cleanWs()
        }
    }
}
EOF

# Matrix builds for cross-platform testing:
cat > Jenkinsfile.matrix << 'EOF'
pipeline {
    agent none

    stages {
        stage('Matrix Build') {
            matrix {
                axes {
                    axis {
                        name 'OS'
                        values 'linux', 'windows', 'macos'
                    }
                    axis {
                        name 'NODE_VERSION'
                        values '16', '18', '20'
                    }
                    axis {
                        name 'DATABASE'
                        values 'postgres', 'mysql', 'mongodb'
                    }
                }

                stages {
                    stage('Test') {
                        agent {
                            label "${OS}-agent"
                        }
                        steps {
                            script {
                                def nodeInstall = tool name: "NodeJS-${NODE_VERSION}", type: 'nodejs'
                                withEnv(["PATH+NODE=${nodeInstall}/bin"]) {
                                    sh """
                                        npm install
                                        npm test -- --database=${DATABASE}
                                    """
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
EOF
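
The matrix above expands to 3 × 3 × 3 = 27 cells, and not every combination is worth running. Declarative Pipeline's `excludes` directive trims cells; for example, skipping MongoDB on Windows (an illustrative exclusion):

```groovy
matrix {
    axes {
        // same three axes as above
    }
    excludes {
        exclude {
            axis {
                name 'OS'
                values 'windows'
            }
            axis {
                name 'DATABASE'
                values 'mongodb'
            }
        }
    }
    stages {
        // same Test stage as above
    }
}
```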

# Parallel deployment to multiple environments:
stage('Parallel Deploy') {
    parallel {
        stage('Deploy US-East') {
            steps {
                deployToRegion('us-east-1')
            }
        }
        stage('Deploy EU-West') {
            steps {
                deployToRegion('eu-west-1')
            }
        }
        stage('Deploy Asia-Pacific') {
            steps {
                deployToRegion('ap-southeast-1')
            }
        }
    }
}

def deployToRegion(region) {
    withAWS(region: region, credentials: 'aws-deploy') {
        sh """
            aws eks update-kubeconfig --region ${region} --name my-cluster
            kubectl set image deployment/myapp app=myapp:${BUILD_NUMBER}
        """
    }
}

# Fail fast configuration:
stage('Critical Tests') {
    failFast true  // Abort the remaining parallel stages as soon as one fails
    parallel {
        stage('Unit Tests') { steps { sh 'npm test' } }
        stage('Lint') { steps { sh 'npm run lint' } }
        stage('Type Check') { steps { sh 'npm run type-check' } }
    }
}
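
Declarative also offers a pipeline-level option that applies the same behavior to every parallel block at once:

```groovy
pipeline {
    agent any
    options {
        // Equivalent to setting failFast true on every parallel stage
        parallelsAlwaysFailFast()
    }
    stages {
        // ...
    }
}
```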

# Conditional parallel execution:
stage('Optional Parallel') {
    steps {
        script {
            def stages = [:]

            if (shouldRunBackendTests()) {
                stages['Backend'] = {
                    sh 'mvn test'
                }
            }

            if (shouldRunFrontendTests()) {
                stages['Frontend'] = {
                    sh 'npm test'
                }
            }

            if (stages.size() > 0) {
                parallel stages
            }
        }
    }
}


Scenario 9: Jenkins Shared Libraries

Reusing pipeline code across multiple projects

sequenceDiagram
    participant Dev1 as Team A Developer
    participant Dev2 as Team B Developer
    participant Git as Git Repository
    participant Library as Jenkins Shared Library
    participant Jenkins as Jenkins Server
    participant Job1 as Team A Job
    participant Job2 as Team B Job
    participant Steps as Shared Steps
    participant K8s as Kubernetes
    participant AWS as AWS ECS
    participant Slack as Slack

    Dev1->>Git: Create shared library repo
    Git->>Library: vars/deploy.groovy
    Library->>Library: vars/notify.groovy

    Dev1->>Jenkins: Configure shared library
    Jenkins->>Library: Load library from Git

    Dev1->>Job1: @Library('my-shared-lib') _
    Job1->>Steps: deploy.k8s(image: 'myapp:v1')
    Steps->>K8s: Deploy to Kubernetes

    Dev2->>Job2: @Library('my-shared-lib') _
    Job2->>Steps: deploy.aws(image: 'myapp:v1')
    Steps->>AWS: Deploy to ECS

    Job1->>Steps: notify.slack(status: 'success')
    Steps->>Slack: Send notifications

    Note over Library: Centralized pipeline logic

Code:

# 1. Create shared library Git repository:
mkdir jenkins-shared-library && cd jenkins-shared-library
git init

# Directory structure:
mkdir -p vars src

# Create a shared step for deployment:
cat > vars/deploy.groovy << 'EOF'
#!/usr/bin/env groovy

def k8s(Map config) {
    // Validate required parameters
    if (!config.image) {
        error "deploy.k8s() requires 'image' parameter"
    }

    def namespace = config.namespace ?: 'default'
    def deployment = config.deployment ?: env.JOB_NAME.toLowerCase().replace('/', '-')
    def containerName = config.container ?: 'app'

    echo "Deploying ${config.image} to Kubernetes namespace ${namespace}"

    // Use credentials for kubeconfig
    withKubeConfig([credentialsId: config.credentialsId ?: 'kubeconfig']) {
        sh """
            kubectl set image deployment/${deployment} \
              ${containerName}=${config.image} \
              -n ${namespace}

            kubectl rollout status deployment/${deployment} -n ${namespace}
        """
    }
}

def aws(Map config) {
    def service = config.service ?: env.JOB_NAME
    def cluster = config.cluster ?: 'default'
    def region = config.region ?: 'us-east-1'

    withAWS(region: region, credentials: config.credentialsId ?: 'aws-deploy') {
        sh """
            aws ecs update-service \
              --cluster ${cluster} \
              --service ${service} \
              --force-new-deployment \
              --region ${region}
        """
    }
}

def docker(Map config) {
    // Pull and run container
    sh "docker pull ${config.image}"
    sh "docker run -d --name ${config.name} ${config.image}"
}
EOF

# Create a notification step:
cat > vars/notify.groovy << 'EOF'
#!/usr/bin/env groovy

def slack(Map config) {
    def channel = config.channel ?: '#builds'
    def status = config.status ?: currentBuild.result ?: 'SUCCESS'

    def color = status == 'SUCCESS' ? 'good' : status == 'UNSTABLE' ? 'warning' : 'danger'
    def message = config.message ?: "Build ${env.JOB_NAME} #${env.BUILD_NUMBER} - ${status}"

    slackSend(
        channel: channel,
        color: color,
        message: message,
        tokenCredentialId: config.credentialsId ?: 'slack-token'
    )
}

def email(Map config) {
    def recipients = config.to ?: emailextrecipients([
        [$class: 'CulpritsRecipientProvider'],
        [$class: 'RequesterRecipientProvider']
    ])

    emailext(
        to: recipients,
        subject: config.subject ?: "Build ${env.JOB_NAME} - ${currentBuild.result}",
        body: config.body ?: readFile('jenkins/email-template.groovy'),
        mimeType: 'text/html'
    )
}

def teams(Map config) {
    office365ConnectorSend(
        message: config.message ?: "Build ${env.JOB_NAME} #${env.BUILD_NUMBER}",
        status: config.status ?: currentBuild.result,
        webhookUrl: config.webhookUrl
    )
}
EOF
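
The nested ternary in notify.slack maps a build status to one of Slack's attachment colour names. Spelled out as a plain function, the rule is:

```shell
# Status-to-colour mapping used by notify.slack:
# SUCCESS -> good (green), UNSTABLE -> warning (yellow), anything else -> danger (red)
slack_color() {
  case "$1" in
    SUCCESS)  echo "good" ;;
    UNSTABLE) echo "warning" ;;
    *)        echo "danger" ;;
  esac
}
```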

# Create a utility class:
mkdir -p src/com/example
cat > src/com/example/Utils.groovy << 'EOF'
package com.example

class Utils {
    // Pipeline steps (sh, withCredentials, ...) are only available on the
    // script context, not in a plain class — pass `this` in from the
    // Jenkinsfile or vars/ script that calls these methods.
    static String getVersion(def script) {
        def tag = script.sh(script: 'git describe --tags --always', returnStdout: true)?.trim()
        return tag ?: '0.0.0-unknown'
    }

    static boolean isReleaseBranch(String branch) {
        return branch ==~ /release\/\d+\.\d+/
    }

    static void withDockerCredentials(def script, Closure body) {
        script.withCredentials([script.usernamePassword(
            credentialsId: 'dockerhub',
            usernameVariable: 'DOCKER_USER',
            passwordVariable: 'DOCKER_PASS'
        )]) {
            body.call()
        }
    }
}
EOF
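
The isReleaseBranch check above anchors the whole branch name (Groovy's `==~` is a full match), so `release/1.2` passes but `release/1.2.3` and `feature/x` do not. The same rule as a quick shell check, handy for verifying the pattern outside Jenkins:

```shell
# Mirrors Utils.isReleaseBranch: full-string match on release/MAJOR.MINOR
is_release_branch() {
  printf '%s' "$1" | grep -Eq '^release/[0-9]+\.[0-9]+$'
}
```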

# Create a pipeline template:
cat > vars/standardPipeline.groovy << 'EOF'
#!/usr/bin/env groovy

import com.example.Utils

def call(Map config) {
    pipeline {
        agent any

        triggers {
            cron(config.cron ?: 'H 2 * * *')
        }

        options {
            timeout(time: config.timeoutMinutes ?: 60, unit: 'MINUTES')
            retry(config.retries ?: 2)
        }

        stages {
            stage('Setup') {
                steps {
                    script {
                        config.setup?.call()
                    }
                }
            }

            stage('Build') {
                steps {
                    script {
                        config.build?.call()
                    }
                }
            }

            stage('Test') {
                parallel {
                    stage('Unit') {
                        steps {
                            script { config.unitTest?.call() }
                        }
                    }
                    stage('Integration') {
                        steps {
                            script { config.integrationTest?.call() }
                        }
                    }
                }
            }

            stage('Deploy') {
                when {
                    branch config.deployBranch ?: 'main'
                    expression { config.deploy != false }
                }
                steps {
                    script {
                        deploy.k8s(
                            image: "${config.image}:${Utils.getVersion(this)}",
                            namespace: config.namespace ?: 'default'
                        )
                    }
                }
            }
        }

        post {
            always {
                script { notify.slack() }
            }
        }
    }
}
EOF

# Add shared library to Jenkins:
# Manage Jenkins → System → Global Pipeline Libraries
# - Name: my-shared-library
# - Default version: main (branch name)
# - Load implicitly: checked (optional)
# - Allow default version to be overridden: checked
# - Include @Library changes in recent changes: checked
# - Retrieval method: Modern SCM
# - Source Code Management: Git
# - Project Repository: https://github.com/myorg/jenkins-shared-library

# Use shared library in pipeline:
cat > Jenkinsfile.using-lib << 'EOF'
@Library('my-shared-library@main') _

// Use standard pipeline template
standardPipeline(
    image: 'myorg/myapp',
    namespace: 'production',
    timeoutMinutes: 30,

    setup: {
        sh 'npm ci'
    },

    build: {
        sh 'npm run build'
        archiveArtifacts artifacts: 'dist/**/*'
    },

    unitTest: {
        sh 'npm test'
        junit 'test-results.xml'
    },

    integrationTest: {
        sh 'npm run test:integration'
    },

    deployBranch: 'release/*'
)

// Alternatively (in a separate Jenkinsfile), call the individual steps directly:
@Library('my-shared-library@main') _

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }

        stage('Deploy') {
            steps {
                script {
                    deploy.k8s(
                        image: "myapp:${BUILD_NUMBER}",
                        namespace: 'staging',
                        credentialsId: 'kube-staging-config'
                    )

                    notify.slack(
                        channel: '#deployments',
                        status: currentBuild.result,
                        message: "Deployed to staging: ${BUILD_NUMBER}"
                    )
                }
            }
        }
    }
}
EOF

# Version your shared library:
# Tag releases in Git:
git tag -a v1.0.0 -m "Initial shared library release"
git push origin v1.0.0

# Use specific version in pipeline:
@Library('my-shared-library@v1.0.0') _
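
Tag bumping can be scripted too; a small sketch (the next_patch_tag helper is hypothetical, assuming vMAJOR.MINOR.PATCH tags):

```shell
# next_patch_tag bumps the patch component of a vMAJOR.MINOR.PATCH tag.
next_patch_tag() {
  latest="$1"                  # e.g. v1.0.0
  base="${latest%.*}"          # v1.0
  patch="${latest##*.}"        # 0
  echo "${base}.$((patch + 1))"
}

# Typical use: bump from the most recent tag and push it
# NEW_TAG=$(next_patch_tag "$(git describe --tags --abbrev=0)")
# git tag -a "$NEW_TAG" -m "Release $NEW_TAG" && git push origin "$NEW_TAG"
```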

# Test shared library locally:
# Create test pipeline:
cat > test-library.Jenkinsfile << 'EOF'
@Library('my-shared-library@feature-branch') _
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    deploy.k8s(image: 'test:v1', namespace: 'test')
                    notify.slack(status: 'SUCCESS')
                }
            }
        }
    }
}
EOF

git add .
git commit -m "Add shared library"
git tag v1.1.0
git push origin v1.1.0


Scenario 10: Jenkins with Kubernetes Agents

Dynamically provisioning build agents in Kubernetes

sequenceDiagram
    participant Pipeline as Jenkins Pipeline
    participant Jenkins as Jenkins Controller
    participant K8s as Kubernetes Cluster
    participant Pod as Ephemeral Agent Pod
    participant Container as Build Container
    participant Registry as Docker Registry
    participant App as Application

    Pipeline->>Jenkins: Request agent
    Jenkins->>K8s: Create agent pod YAML
    K8s->>Pod: Launch pod with containers
    Pod->>Container: Start jnlp container
    Pod->>Container: Start build container
    Container->>Jenkins: Connect via JNLP
    Jenkins->>Container: Execute pipeline steps
    Container->>Git: Clone repository
    Container->>Container: Run build
    Container->>Container: Run tests
    Container->>Registry: Build and push image
    Container->>App: Deploy to production
    Jenkins->>K8s: Pod no longer needed
    K8s->>Pod: Delete pod
    Note over Pod: Ephemeral, scalable agents

Code:

# Install Kubernetes plugin in Jenkins:
# Manage Jenkins → Manage Plugins → Available → Search "Kubernetes" → Install

# Configure Kubernetes cloud:
# Manage Jenkins → Manage Nodes and Clouds → Configure Clouds → Add a new cloud → Kubernetes

# Configuration:
- Name: kubernetes
- Kubernetes Namespace: jenkins-agents
- Jenkins URL: http://jenkins.jenkins.svc.cluster.local:8080
- Jenkins Tunnel: jenkins-agent.jenkins.svc.cluster.local:50000
- Kubernetes URL: https://kubernetes.default
- Kubernetes server certificate key: <auto-populated>
- Credentials: Add Jenkins service account token

# Create service account for Jenkins:
cat > jenkins-service-account.yaml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["create", "delete", "get", "list", "patch", "update"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["create", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
EOF

kubectl apply -f jenkins-service-account.yaml

# Get token for credentials (Kubernetes <= 1.23 auto-creates a token secret):
SECRET=$(kubectl get serviceaccount jenkins -n jenkins -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET -n jenkins -o jsonpath='{.data.token}' | base64 --decode)
echo $TOKEN

# Kubernetes >= 1.24 no longer creates these secrets automatically; request one instead:
kubectl create token jenkins -n jenkins

# Configure pod template:
# Manage Jenkins → Configure Clouds → kubernetes → Pod Template → Add

# Pod Template Configuration:
- Name: jenkins-agent
- Namespace: jenkins-agents
- Labels: jenkins-agent
- Service Account: jenkins
- Usage: Use this node as much as possible

# Add container template:
- Name: jnlp (required)
- Docker image: jenkins/inbound-agent:latest
- Always pull image: true
- Working directory: /home/jenkins/agent

# Add build container:
- Name: build
- Docker image: docker:20.10-dind
- Command to run: dockerd-entrypoint.sh
- Arguments to pass: --storage-driver=overlay2 --insecure-registry=localhost:5000
- Privileged: true
- Always pull image: true

# Or configure via pipeline:
cat > Jenkinsfile.k8s-agent << 'EOF'
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: jenkins-agent
spec:
  serviceAccountName: jenkins
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:latest
    env:
    - name: JENKINS_URL
      value: "http://jenkins.jenkins.svc.cluster.local:8080"
    - name: JENKINS_TUNNEL
      value: "jenkins-agent.jenkins.svc.cluster.local:50000"
  - name: build
    image: maven:3.8-openjdk-17
    command:
    - sleep
    args:
    - infinity
    workingDir: /home/jenkins/agent
    volumeMounts:
    - name: maven-cache
      mountPath: /root/.m2
  - name: docker
    image: docker:20.10-dind
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  - name: kubectl
    image: bitnami/kubectl:latest
    command:
    - sleep
    args:
    - infinity
  volumes:
  - name: maven-cache
    persistentVolumeClaim:
      claimName: jenkins-maven-cache
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
"""
        }
    }

    stages {
        stage('Build') {
            steps {
                container('build') {
                    sh 'mvn clean compile'
                }
            }
        }

        stage('Test') {
            steps {
                container('build') {
                    sh 'mvn test'
                }
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                }
            }
        }

        stage('Docker Build') {
            steps {
                container('docker') {
                    sh """
                        docker build -t myapp:${BUILD_NUMBER} .
                        docker tag myapp:${BUILD_NUMBER} myregistry/myapp:${BUILD_NUMBER}
                        docker push myregistry/myapp:${BUILD_NUMBER}
                    """
                }
            }
        }

        stage('Deploy') {
            steps {
                container('kubectl') {
                    withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
                        sh """
                            kubectl set image deployment/myapp app=myregistry/myapp:${BUILD_NUMBER}
                            kubectl rollout status deployment/myapp
                        """
                    }
                }
            }
        }
    }
}
EOF
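
The pod template above mounts a jenkins-maven-cache PersistentVolumeClaim, which must exist before the first build runs; a minimal claim (size and namespace are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-maven-cache
  namespace: jenkins-agents
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```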

# Use custom pod templates per stage:
stage('Backend Build') {
    agent {
        kubernetes {
            yaml """
spec:
  containers:
  - name: node
    image: node:18
    command: [sleep]
    args: [infinity]
  """
        }
    }
    steps {
        container('node') {
            sh 'npm ci && npm test'
        }
    }
}

stage('Docker Build') {
    agent {
        kubernetes {
            yaml """
spec:
  containers:
  - name: docker
    image: docker:latest
    command: [sleep]
    args: [infinity]
    securityContext:
      privileged: true
  """
        }
    }
    steps {
        container('docker') {
            sh 'docker build -t myapp .'
        }
    }
}

# Dynamic agent provisioning based on build type:
def getAgentPod(String buildType) {
    if (buildType == 'java') {
        return """
spec:
  containers:
  - name: maven
    image: maven:3.8-openjdk-17
    command: [sleep]
    args: [infinity]
"""
    } else if (buildType == 'node') {
        return """
spec:
  containers:
  - name: node
    image: node:18
    command: [sleep]
    args: [infinity]
"""
    }
    // Fail fast instead of returning a null pod spec
    error "Unsupported build type: ${buildType}"
}

pipeline {
    agent {
        kubernetes {
            yaml getAgentPod(params.BUILD_TYPE)
        }
    }
    // ...
}

# Set resource limits:
cat > pod-with-resources.yaml << 'EOF'
spec:
  containers:
  - name: build
    image: maven:3.8-openjdk-17
    resources:
      requests:
        cpu: 1000m
        memory: 2Gi
      limits:
        cpu: 2000m
        memory: 4Gi
EOF

# Use spot instances for cost savings:
spec:
  nodeSelector:
    workload: batch
  tolerations:
  - key: "spot"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: build
    # ...


ADVANCED LEVEL: Production-Ready Patterns

Scenario 11: Pipeline Visualization with Blue Ocean

Modern UI for pipeline creation and monitoring

sequenceDiagram
    participant Dev as Developer
    participant Classic as Classic Jenkins UI
    participant BlueOcean as Blue Ocean UI
    participant PipelineEditor as Visual Pipeline Editor
    participant Git as Git Repository
    participant Jenkins as Jenkins Core
    participant Stage as Pipeline Stage
    participant Step as Pipeline Step

    Dev->>Classic: Click "Open Blue Ocean"
    Classic->>BlueOcean: Redirect to modern UI
    BlueOcean->>Dev: Show pipeline list

    Dev->>BlueOcean: Click "New Pipeline"
    BlueOcean->>PipelineEditor: Open visual editor
    PipelineEditor->>Dev: Show stage templates

    Dev->>PipelineEditor: Add stage "Build"
    PipelineEditor->>Step: Add shell step
    Step->>Dev: Enter: sh 'make build'

    Dev->>PipelineEditor: Add stage "Test"
    PipelineEditor->>Step: Add test step
    Step->>Dev: Enter: sh 'npm test'

    Dev->>PipelineEditor: Add stage "Deploy"
    PipelineEditor->>Step: Add deploy step
    Step->>Dev: Enter: sh './deploy.sh'

    Dev->>PipelineEditor: Click "Save"
    PipelineEditor->>Git: Commit Jenkinsfile
    Git->>Jenkins: Trigger first build

    Jenkins->>Stage: Build
    Stage->>BlueOcean: Visual progress

    Jenkins->>Stage: Test
    Stage->>BlueOcean: Show test results

    Jenkins->>Stage: Deploy
    Stage->>BlueOcean: Show deployment

    BlueOcean->>Dev: Beautiful pipeline visualization!

Code:

# Install Blue Ocean plugin:
# Manage Jenkins → Manage Plugins → Available → Search "Blue Ocean" → Install

# Access Blue Ocean:
# Click "Open Blue Ocean" on main page
# Or navigate to: http://localhost:8080/blue

# Create pipeline in Blue Ocean:
# 1. Click "New Pipeline"
# 2. Select Git provider (GitHub, Bitbucket, Git)
# 3. Enter repository URL
# 4. Blue Ocean automatically detects Jenkinsfile
# 5. If no Jenkinsfile, click "Create Pipeline"
# 6. Visual editor opens

# Visual Pipeline Editor:
# Click "+" to add stage
# Click stage → "+" to add step
# Step types available:
# - Shell Script: sh '...'
# - Print Message: echo '...'
# - Build: Other job
# - Deploy: Deploy to environment
# - Test: JUnit, TestNG
# - Artifact: Archive artifacts
# - Wait: Input/sleep

# Example pipeline created visually:
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit') {
                    steps {
                        sh 'npm test'
                    }
                }
                stage('Integration') {
                    steps {
                        sh 'npm run test:integration'
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'
            }
        }
    }
}

# Blue Ocean URL patterns:
# /blue/organizations/jenkins/ - List all pipelines
# /blue/organizations/jenkins/my-pipeline/ - Pipeline view
# /blue/organizations/jenkins/my-pipeline/activity/ - Build history
# /blue/organizations/jenkins/my-pipeline/branches/ - Branch view
# /blue/organizations/jenkins/my-pipeline/pr/ - Pull requests
# /blue/organizations/jenkins/my-pipeline/detail/master/1/pipeline/ - Run details

# Customize Blue Ocean view:
# Set a custom build display name (shown in the Blue Ocean run list):
stage('Build') {
    steps {
        script {
            // Rename this run, e.g. "Build #42 - staging"
            currentBuild.displayName = "Build #${env.BUILD_NUMBER} - ${params.ENVIRONMENT}"
        }
    }
}

# Add stage result icons:
stage('Test') {
    steps {
        catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
            sh 'npm test'
        }
    }
}

# View test results in Blue Ocean:
# Blue Ocean automatically shows JUnit, Mocha, Karma results
# Install "mocha-junit-reporter" for Mocha tests:
npm install --save-dev mocha-junit-reporter

# Update test script:
"test": "mocha --reporter mocha-junit-reporter --reporter-options mochaFile=./test-results.xml"

# Blue Ocean will display:
# - Test count (passed/failed/skipped)
# - Test duration
# - Detailed failure information
# - Test trends
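For reference, the JUnit-style XML that Blue Ocean parses has this general shape; the report below is a hand-written sample, not actual mocha-junit-reporter output:

```shell
# Write a minimal JUnit-style report and count its test cases
cat > /tmp/test-results.xml << 'EOF'
<?xml version="1.0"?>
<testsuite name="unit" tests="2" failures="1">
  <testcase classname="math" name="adds"/>
  <testcase classname="math" name="divides">
    <failure message="expected 2 got 3"/>
  </testcase>
</testsuite>
EOF

# Each <testcase> element becomes one entry in the Blue Ocean test view
grep -c "<testcase" /tmp/test-results.xml
```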

# Visualize parallel stages:
stage('Parallel Work') {
    parallel {
        stage('Backend') { steps { sh 'sleep 10' } }
        stage('Frontend') { steps { sh 'sleep 15' } }
        stage('API') { steps { sh 'sleep 5' } }
    }
}
# Blue Ocean shows parallel bars progressing simultaneously

# Interact with running builds:
# Click running build → Pause button to pause
# Click "Resume" to continue
# Click "Stop" to abort

# Replay with Blue Ocean:
# Click build → Replay → Edit Jenkinsfile
# Blue Ocean opens editor
# Make changes and run
# Changes are not committed to Git

# Input step visualization:
stage('Approve') {
    steps {
        input(
            message: 'Approve deployment?',
            ok: 'Deploy',
            parameters: [
                text(name: 'COMMENTS', defaultValue: '', description: 'Approval comments')
            ]
        )
    }
}
# Blue Ocean shows big approve/reject buttons

# Integration with GitHub:
# Blue Ocean shows GitHub branch and PR information
# Shows commit messages and authors
# Links to GitHub diffs

# Customize with Blue Ocean CSS:
# Manage Jenkins → Configure System → Blue Ocean → Custom CSS
# Add custom colors, fonts, spacing

# Embed Blue Ocean view:
# Use iframe to embed in other dashboards:
<iframe src="http://jenkins.example.com/blue/organizations/jenkins/my-pipeline/activity/" width="100%" height="800px"></iframe>

# Blue Ocean REST API:
# Get pipeline runs:
curl -u user:token http://localhost:8080/blue/rest/organizations/jenkins/pipelines/my-pipeline/runs/

# Get specific run details:
curl -u user:token http://localhost:8080/blue/rest/organizations/jenkins/pipelines/my-pipeline/runs/1/

# Get nodes (stages):
curl -u user:token http://localhost:8080/blue/rest/organizations/jenkins/pipelines/my-pipeline/runs/1/nodes/

# Enable Blue Ocean by default:
# Manage Jenkins → Configure System → Blue Ocean → Select "Default view"


Scenario 12: Jenkins Security & RBAC

Securing Jenkins with authentication and authorization

sequenceDiagram
    participant User as Developer
    participant Anonymous as Anonymous User
    participant Auth as Authentication Provider
    participant Jenkins as Jenkins Server
    participant Matrix as Matrix Authorization
    participant AdminRole as Admin Role
    participant DevRole as Developer Role
    participant ViewerRole as Viewer Role
    participant Job as Jenkins Job

    Anonymous->>Jenkins: Access Jenkins URL
    Jenkins->>Auth: Redirect to login
    Auth->>User: Show login form
    User->>Auth: Enter credentials
    Auth->>Auth: Validate credentials
    Auth-->>Jenkins: User authenticated
    Jenkins->>Matrix: Check permissions

    Matrix->>AdminRole: admin: Overall/Administer
    Matrix->>DevRole: developer: Job/Build, Job/Configure
    Matrix->>ViewerRole: viewer: Job/Read, View/Read

    User->>Matrix: "Can I create job?"
    Matrix->>AdminRole: Yes
    Matrix->>DevRole: No

    User->>Job: Attempt to build
    Matrix->>Job: Check build permission
    Job-->>User: Access granted/denied
    Note over Matrix: Role-based access control

Code:

# Configure security:
# Manage Jenkins → Configure Global Security

# Enable security (check "Enable Security")
# Security Realm: Jenkins' own user database
# Authorization: Matrix-based security

# Add users and permissions:
# Authentication → Security Realm:
# - Jenkins' own user database (default)
# - LDAP (for enterprise)
# - Unix user/group database
# - SAML 2.0 (for SSO)

# Add admin user:
# Manage Jenkins → Manage Users → Create User
# Username: admin
# Password: secure-admin-password
# Full name: Jenkins Administrator
# Email: admin@example.com

# Configure Matrix Authorization:
# Manage Jenkins → Configure Global Security → Authorization
# Add admin to matrix:
# - User/group: admin
# - Overall: Administer (check all boxes)

# Create developer role:
# Install "Role-based Authorization Strategy" plugin

# Manage Jenkins → Manage and Assign Roles → Manage Roles
# Global roles → Add → developer
# - Overall → Read: checked
# - Job → Build, Cancel, Configure, Create, Delete, Discover, Read, Workspace: checked
# - Run → Delete, Replay, Update: checked
# - View → Configure, Create, Delete, Read: checked

# Item roles → Add → project-developer
# - Pattern: myapp-.*  (regex for job names)
# - Job → Build, Configure, Read, Workspace: checked
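The item-role pattern is a regular expression matched against the full job name (the plugin effectively anchors it). A quick local sanity check of a pattern, assuming `grep -E` approximates the plugin's Java regex semantics closely enough for this purpose:

```shell
# ^...$ mimics the whole-name matching the Role Strategy plugin performs
pattern='^myapp-.*$'

for job in myapp-frontend myapp-backend infra-terraform; do
  if printf '%s' "$job" | grep -Eq "$pattern"; then
    echo "$job: covered by project-developer"
  else
    echo "$job: not covered"
  fi
done
```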

# Assign roles:
# Assign Roles → Global roles → Add user: alice
# - Assign: developer

# Assign Roles → Item roles → Add user: alice
# - Assign: project-developer

# Alternative: Project-based Matrix Authorization
# In job configuration → Enable project-based security
# Add specific users/groups for that job

# Configure LDAP (enterprise example):
# Manage Jenkins → Configure Global Security → Security Realm → LDAP
- Server: ldap://ldap.example.com:389
- Root DN: dc=example,dc=com
- User search base: ou=users
- User search filter: uid={0}
- Group search base: ou=groups
- Manager DN: cn=admin,dc=example,dc=com
- Manager Password: ****

# Configure SAML SSO:
# Install "SAML" plugin
# Manage Jenkins → Configure Global Security → SAML 2.0
- IdP Metadata: <upload metadata XML>
- Display Name Attribute: displayname
- Username Attribute: username
- Email Attribute: email
- Groups Attribute: groups

# Use script console for bulk user management:
# Manage Jenkins → Script Console

# Create multiple users:
'''
import hudson.security.HudsonPrivateSecurityRealm
import jenkins.model.Jenkins

// Build one realm and add every account before installing it
// (assigning a new realm replaces any existing users)
def hudsonRealm = new HudsonPrivateSecurityRealm(false)
for (user in ['alice', 'bob', 'charlie']) {
    hudsonRealm.createAccount(user, "Password123!")
}
Jenkins.instance.securityRealm = hudsonRealm
Jenkins.instance.save()
'''

# Audit logging:
# Install "Audit Trail" plugin
# Manage Jenkins → Configure System → Audit Trail
- Loggers: Jenkins.security.*
- Log file: /var/log/jenkins/audit.log
- Include build causes: checked

# Enable CSRF protection:
# Manage Jenkins → Configure Global Security → CSRF Protection
- Enable proxy compatibility: checked
- Default Crumb Issuer: checked

# API token management:
# User → Configure → API Token → Add new Token
# Name: ci-token
# Generate → Copy token

# Use token in API calls:
curl -u alice:api-token http://localhost:8080/api/json

# SSH key authentication:
# User → Configure → SSH Public Keys → Add
# Paste SSH public key

# Script for creating users from file:
'''
import jenkins.model.Jenkins

def createUsersFromCSV() {
    // The script console runs on the controller, so read the file directly
    def csv = new File('/var/lib/jenkins/users.csv').text
    def hudsonRealm = Jenkins.instance.securityRealm

    csv.split('\n').each { line ->
        def parts = line.split(',')
        def username = parts[0]
        def email = parts[1]
        def role = parts[2]

        // Create user (requires Jenkins' own user database as the realm)
        hudsonRealm.createAccount(username, "ChangeMe123!")

        // Assign role (exact API depends on the Role Strategy plugin version)
        def strategy = Jenkins.instance.authorizationStrategy
        strategy.add(RoleBasedAuthorizationStrategy.GLOBAL, username, role)
    }
}

createUsersFromCSV()
'''

# users.csv:
alice,alice@example.com,developer
bob,bob@example.com,developer
charlie,charlie@example.com,viewer
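Before feeding a CSV like this to the script console, its fields can be checked with a plain IFS read; a local sketch (the /tmp path is illustrative):

```shell
# Recreate the users.csv format from above
cat > /tmp/users.csv << 'EOF'
alice,alice@example.com,developer
bob,bob@example.com,developer
charlie,charlie@example.com,viewer
EOF

# Split each line into the three fields the Groovy script expects
while IFS=',' read -r username email role; do
  echo "would create $username <$email> with role $role"
done < /tmp/users.csv
```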

# Disable the inbound agent TCP port if no JNLP agents connect:
# Manage Jenkins → Configure Global Security → Agents → TCP port for inbound agents: Disable

# Secure Jenkins behind reverse proxy:
# nginx configuration:
cat > /etc/nginx/conf.d/jenkins.conf << 'EOF'
server {
    listen 443 ssl;
    server_name jenkins.example.com;

    ssl_certificate /etc/ssl/certs/jenkins.crt;
    ssl_certificate_key /etc/ssl/private/jenkins.key;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF

# Configure Jenkins URL:
# Manage Jenkins → Configure System → Jenkins Location
# Jenkins URL: https://jenkins.example.com

# Secure with firewall:
sudo ufw allow 443/tcp
sudo ufw deny 8080/tcp

# Enable login sessions:
# Manage Jenkins → Configure Global Security → Session Management
# Enable session fixation protection
# Enable remember me (optional)

# Password policy:
# Install "Password Policy" plugin
# Manage Jenkins → Configure System → Password Policy
- Minimum length: 12
- Require uppercase: 2
- Require lowercase: 2
- Require digits: 2
- Require special characters: 1
- Expiration days: 90

# Disable unused protocols:
# Manage Jenkins → Configure Global Security → Agent
# JNLP protocols: Disable


Scenario 13: Distributed Builds with Multiple Agents

Scaling Jenkins with master-agent architecture

sequenceDiagram
    participant Master as Jenkins Master
    participant Agent1 as Agent 1 (Linux)
    participant Agent2 as Agent 2 (Windows)
    participant Agent3 as Agent 3 (macOS)
    participant Job1 as Build Job 1
    participant Job2 as Build Job 2
    participant Job3 as Build Job 3
    participant Registry as Docker Registry

    Master->>Agent1: Connect via JNLP
    Master->>Agent2: Connect via JNLP
    Master->>Agent3: Connect via SSH

    Job1->>Master: Request executor
    Master->>Agent1: Schedule job (label: linux)
    Agent1->>Job1: Execute build
    Job1->>Agent1: Build artifact
    Agent1->>Registry: Push artifact

    Job2->>Master: Request executor
    Master->>Agent2: Schedule job (label: windows)
    Agent2->>Job2: Execute build

    Job3->>Master: Request executor
    Master->>Agent3: Schedule job (label: macos)
    Agent3->>Job3: Execute build

    Note over Agent1,Agent3: Parallel execution, resource optimization

Code:

# Set up Jenkins agent on Linux:
# 1. Create agent directory
mkdir /opt/jenkins-agent

# 2. Download agent JAR
wget http://jenkins-master:8080/jnlpJars/agent.jar -O /opt/jenkins-agent/agent.jar

# 3. Create service user
useradd --system --home-dir /opt/jenkins-agent jenkins-agent

# 4. Create systemd service
cat > /etc/systemd/system/jenkins-agent.service << 'EOF'
[Unit]
Description=Jenkins Agent
After=network.target

[Service]
User=jenkins-agent
ExecStart=/usr/bin/java -jar /opt/jenkins-agent/agent.jar -jnlpUrl http://jenkins-master:8080/computer/linux-agent/jenkins-agent.jnlp -secret <secret-key>
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# 5. Start agent
systemctl daemon-reload
systemctl enable --now jenkins-agent

# Set up Windows agent:
# 1. Download agent.jar from Jenkins master
# 2. Create C:\Jenkins\Agent directory
# 3. Create startup script launch.bat:
java -jar C:\Jenkins\Agent\agent.jar -jnlpUrl http://jenkins-master:8080/computer/windows-agent/jenkins-agent.jnlp -workDir "C:\Jenkins\Agent\Workspace" -secret <secret-key>

# 4. Create scheduled task to run on startup
schtasks /create /tn "JenkinsAgent" /tr "C:\Jenkins\Agent\launch.bat" /sc onstart /ru SYSTEM

# Configure agent in Jenkins:
# Manage Jenkins → Nodes → New Node
# - Node name: linux-agent
# - Permanent Agent
# - # of executors: 4
# - Remote root directory: /opt/jenkins-agent/workspace
# - Labels: linux docker maven
# - Launch method: Launch agent via SSH
# - Host: 192.168.1.101
# - Credentials: ssh-key

# Configure agent via API:
# Create node config XML
cat > agent-config.xml << 'EOF'
<slave>
  <name>linux-agent-2</name>
  <description>Linux build agent</description>
  <remoteFS>/opt/jenkins-agent</remoteFS>
  <numExecutors>4</numExecutors>
  <mode>NORMAL</mode>
  <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
  <launcher class="hudson.plugins.sshslaves.SSHLauncher" plugin="ssh-slaves@1.31.2">
    <host>192.168.1.102</host>
    <port>22</port>
    <credentialsId>ssh-key</credentialsId>
    <javaPath>/usr/bin/java</javaPath>
  </launcher>
  <label>linux docker</label>
  <nodeProperties/>
</slave>
EOF

# Create the node with the Jenkins CLI
# (doCreateItem expects a JSON form submission, not raw XML;
#  the CLI accepts the node config.xml on stdin)
java -jar jenkins-cli.jar -s http://jenkins:8080 -auth admin:token \
  create-node linux-agent-2 < agent-config.xml
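Rather than hand-editing the XML per node, it can be rendered from variables; a minimal sketch (node name, host, and paths are placeholders):

```shell
NODE_NAME="linux-agent-2"
NODE_HOST="192.168.1.102"

# Unquoted EOF so the shell expands ${NODE_NAME} and ${NODE_HOST}
cat > /tmp/agent-config.xml << EOF
<slave>
  <name>${NODE_NAME}</name>
  <remoteFS>/opt/jenkins-agent</remoteFS>
  <numExecutors>4</numExecutors>
  <mode>NORMAL</mode>
  <launcher class="hudson.plugins.sshslaves.SSHLauncher">
    <host>${NODE_HOST}</host>
    <port>22</port>
    <credentialsId>ssh-key</credentialsId>
  </launcher>
  <label>linux docker</label>
</slave>
EOF

grep "<name>" /tmp/agent-config.xml
```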

# Agent monitoring:
# Install "Monitoring" plugin
# Shows CPU, memory, disk usage per agent

# Set agent availability:
# Manage Jenkins → Nodes → <Agent> → Configure
# - Usage: Leave this machine for tied jobs only
# - Labels: specific-label

# Use specific agents in pipeline:
pipeline {
    agent {
        label 'linux && docker'
    }
    stages {
        stage('Build') {
            agent {
                label 'windows'
            }
            steps {
                bat 'msbuild.exe myapp.sln'
            }
        }
    }
}

# Cloud agent provisioning:
# Install "EC2 Fleet" plugin for AWS
# Install "Azure VM Agents" plugin for Azure
# Install "Google Compute Engine" plugin for GCP

# EC2 Fleet configuration:
# Manage Jenkins → Configure System → Cloud
# - Name: aws-fleet
# - Region: us-east-1
# - Fleet ID: fleet-123456
# - Labels: ec2-agent
# - FS Root: /home/ec2-user/jenkins
# - Instance Type: m5.large

# Docker agent provisioning:
# Install "Docker" plugin
# Manage Jenkins → Configure System → Cloud
# - Add Docker cloud
# - Docker Host URI: tcp://docker-host:2376
# - Docker Agent Template:
#   - Labels: docker-agent
#   - Docker Image: jenkins/inbound-agent:latest
#   - Remoting Directory: /home/jenkins/agent

# Dynamic agent allocation:
pipeline {
    agent none
    stages {
        stage('Build on Linux') {
            agent {
                node {
                    label 'linux'
                    customWorkspace 'build-linux'
                }
            }
            steps {
                sh 'make build'
                stash includes: 'target/**', name: 'build-artifacts'
            }
        }
        stage('Test on Windows') {
            agent {
                node {
                    label 'windows'
                    customWorkspace 'build-windows'
                }
            }
            steps {
                unstash 'build-artifacts'
                bat 'test.exe'
            }
        }
    }
}

# agent → any (use any available agent)
# agent none (no default agent, must specify per stage)
# agent { label 'linux' } (specific label)
# agent { node { ... } } (advanced options)
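A label expression like `linux && docker` requires every listed label to be present on the agent; a toy matcher illustrating the AND case (the function is invented for illustration, not a Jenkins API):

```shell
# Return success only if the agent's label set contains every requested label
has_labels() {
  agent_labels="$1"; shift
  for want in "$@"; do
    case " $agent_labels " in
      *" $want "*) ;;      # label present, keep checking
      *) return 1 ;;       # one missing label fails the whole expression
    esac
  done
}

has_labels "linux docker maven" linux docker && echo "matches 'linux && docker'"
has_labels "windows" linux docker || echo "plain 'windows' agent does not match"
```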

# Agent attributes (customWorkspace must be wrapped in node { }):
agent {
    node {
        label 'linux'
        customWorkspace "/tmp/build-${BUILD_NUMBER}"
    }
}
# reuseNode true is available inside docker/dockerfile agents to reuse
# the node and workspace of the pipeline's top-level agent

# Agent environment variables:
agent {
    kubernetes {
        yaml '''
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: jenkins
spec:
  containers:
  - name: maven
    image: maven:3.8
    env:
    - name: MAVEN_OPTS
      value: "-Xmx2g -XX:+UseG1GC"
    - name: NPM_CONFIG_CACHE
      value: "/home/jenkins/.npm"
    volumeMounts:
    - name: maven-cache
      mountPath: /root/.m2
    - name: node-cache
      mountPath: /home/jenkins/.npm
  volumes:
  - name: maven-cache
    persistentVolumeClaim:
      claimName: jenkins-maven-cache
  - name: node-cache
    emptyDir: {}
'''
    }
}

# Configure auto-scaling:
# Manage Jenkins → Configure System → Cloud → kubernetes
# - Container Cleanup Timeout: 60
# - Max # of instances: 10
# - Idle Minutes: 10 (keep agents for 10 minutes after build)

# Monitor agent usage:
# Install "Metrics" plugin
# Access metrics: http://jenkins:8080/metrics/<api-key>/metrics

# View agent logs:
# Manage Jenkins → Nodes → <Agent> → Log

# Troubleshoot agent connection issues:
# 1. Check network connectivity
# 2. Verify JNLP secret: Manage Jenkins → Nodes → <Agent> → Show secret
# 3. Check agent logs: /opt/jenkins-agent/logs/agent.log
# 4. Verify Java version: java -version (must be Java 11+)
# 5. Check firewall rules: sudo ufw status
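For step 4, the major version can be extracted from the version banner; a sketch against a sample banner string (real output comes from `java -version 2>&1 | head -1`):

```shell
# Sample banner; substitute the real `java -version` output
ver_line='openjdk version "17.0.9" 2023-10-17'

# Capture the digits between the opening quote and the first dot
major=$(printf '%s\n' "$ver_line" | sed -E 's/.*"([0-9]+)\..*/\1/')

if [ "$major" -ge 11 ]; then
  echo "Java $major is new enough for Jenkins agents"
else
  echo "Java $major is too old (need 11+)"
fi
```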

# Agent health checks:
# Create health check job running every 5 minutes:
pipeline {
    agent { label 'linux' }
    triggers { cron('H/5 * * * *') }
    stages {
        stage('Health Check') {
            steps {
                sh '''
                    curl -f http://localhost:8080/health || exit 1
                    docker ps || exit 1
                    ! df -h | grep -q '100%' || exit 1
                '''
            }
        }
    }
}

# Remove offline agents automatically:
# Install "Swarm" plugin for dynamic agents
# Or use Groovy script:
'''
Jenkins.instance.computers.each { computer ->
    if (computer.isOffline() && computer.offlineCauseReason.contains("timed out")) {
        println "Removing offline agent: ${computer.name}"
        computer.doDoDelete()
    }
}
'''


Scenario 14: Jenkins Monitoring & Alerting

Comprehensive monitoring of Jenkins health and build metrics

sequenceDiagram
    participant Jenkins as Jenkins Server
    participant Metrics as Metrics Plugin
    participant Prometheus as Prometheus
    participant Grafana as Grafana Dashboard
    participant AlertManager as AlertManager
    participant Slack as Slack
    participant PagerDuty as PagerDuty
    participant DevOps as DevOps Engineer

    Jenkins->>Metrics: Collect metrics every 15s
    Metrics->>Prometheus: Expose /metrics endpoint
    Prometheus->>Metrics: Scrape metrics
    Prometheus->>Prometheus: Store time-series data

    Prometheus->>Grafana: Provide metrics data
    Grafana->>Grafana: Render dashboards
    DevOps->>Grafana: View dashboards

    Prometheus->>AlertManager: Trigger alerts
    AlertManager->>AlertManager: Apply alert rules

    alt Queue size > 50
        AlertManager->>Slack: Send warning notification
    end

    alt Master down for 5m
        AlertManager->>PagerDuty: Page on-call engineer
    end

    alt Disk usage > 90%
        AlertManager->>Slack: Critical disk alert
    end

    Note over Prometheus: Proactive monitoring

Code:

# Install monitoring plugins:
# Manage Jenkins → Manage Plugins → Available
# - Metrics
# - Prometheus metrics plugin
# - Monitoring (JavaMelody)
# - Build Failure Analyzer
# - Disk Usage
# - CloudBees Disk Usage Simple
# - Build Time Trend

# Configure Prometheus metrics:
# Manage Jenkins → Configure System → Prometheus
# - Metrics endpoint: /prometheus
# - Default namespace: jenkins
# - Additional rules: checked
# - Job attribute name: jenkins_job
# - Default metrics period: 30

# Sample prometheus.yml configuration:
cat > prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['jenkins:8080']
    basic_auth:
      username: 'prometheus'
      password: '<api-token>'

  - job_name: 'jenkins-agents'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['agent1:8080', 'agent2:8080']

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - alertmanager:9093

rule_files:
  - "jenkins_alerts.yml"
EOF

# Create alert rules:
cat > jenkins_alerts.yml << 'EOF'
groups:
- name: jenkins
  rules:
  - alert: JenkinsBuildQueueTooHigh
    expr: jenkins_queue_value > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Jenkins build queue is high ({{ $value }} items)"

  - alert: JenkinsDiskSpaceLow
    expr: jenkins_disk_usage_bytes / jenkins_disk_total_bytes > 0.9
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Disk usage is above 90% on Jenkins"

  - alert: JenkinsAgentOffline
    expr: jenkins_agent_online == 0
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Jenkins agent {{ $labels.agent }} is offline"

  - alert: JenkinsMasterDown
    expr: up{job="jenkins"} == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Jenkins master is down"

  - alert: JenkinsBuildFailureRate
    expr: rate(jenkins_builds_failed_total[1h]) / rate(jenkins_builds_total[1h]) > 0.2
    for: 15m
    labels:
      severity: warning
    annotations:
      summary: "Build failure rate is above 20%"
EOF
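The JenkinsBuildFailureRate rule is just the ratio of the two counters over the window; the threshold arithmetic, worked in awk with made-up sample counts:

```shell
failed_per_hour=12   # sample value for rate(jenkins_builds_failed_total[1h])
total_per_hour=50    # sample value for rate(jenkins_builds_total[1h])

# Fires when failed/total exceeds the 0.2 threshold in the alert rule
awk -v f="$failed_per_hour" -v t="$total_per_hour" 'BEGIN {
  rate = f / t
  printf "failure rate: %.2f (%s)\n", rate, (rate > 0.2 ? "above 20% threshold" : "ok")
}'
```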

# Configure Grafana dashboards:
# Import dashboard ID: 9964 (Jenkins Performance)
# Or create custom dashboard with:
# - Build duration trends
# - Queue time
# - Agent utilization
# - Disk usage over time
# - Build success rate

# Install and configure JavaMelody:
# Already included in "Monitoring" plugin
# Access at: http://jenkins:8080/monitoring

# Key metrics to monitor:
# - System: CPU, Memory, Disk, Threads
# - Jenkins: Build queue, Executor count, Active builds
# - Builds: Success rate, Duration, Failure trends
# - Agents: Online/offline status, Resource usage

# Create Build Failure Analyzer rules:
# Manage Jenkins → Build Failure Analyzer → Patterns
# Add patterns for common failures:
# - Pattern: "npm ERR! code E401"
#   Description: "NPM authentication failed - check credentials"
#   Category: "Authentication"
# - Pattern: "fatal: unable to access"
#   Description: "Git repository access failed - check network"
#   Category: "SCM"
# - Pattern: "No space left on device"
#   Description: "Disk full - clean up workspace"
#   Category: "Infrastructure"

# Set up email notifications for failures:
pipeline {
    post {
        failure {
            script {
                def failureReason = currentBuild.rawBuild.getAction(io.jenkins.plugins.bfa.model.FailureCauseBuildAction.class)?.getFailureCauses()?.collect { it.getName() }?.join(", ")
                emailext (
                    to: "dev-team@example.com",
                    subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                    body: """
                    <h2>Build Failed</h2>
                    <p>Job: ${env.JOB_NAME}</p>
                    <p>Build: #${env.BUILD_NUMBER}</p>
                    <p>Failure Reason: ${failureReason ?: 'Unknown'}</p>
                    <p>Console: ${env.BUILD_URL}console</p>
                    """,
                    mimeType: 'text/html'
                )
            }
        }
    }
}

# Monitor Jenkins logs:
# Create log parser rules:
cat > jenkins-log-rules.txt << 'EOF'
error /ERROR|FATAL/
warning /WARN/
info /INFO/
EOF

# Install "Log Parser" plugin and configure in job
# Post-build Actions → Console output parsing → Parse console log using project rules

# Disk usage monitoring:
# Manage Jenkins → Manage Nodes → Disk Usage
# Install "ThinBackup" plugin to automate cleanup

# Monitor plugin health:
# Install "Plugin Usage" plugin
# Shows which plugins are actively used vs installed

# API for monitoring:
# Get build metrics:
# Quote the URL (an unquoted & backgrounds the command) and pass -g so
# curl does not glob the [] in the tree parameter
curl -g -u admin:token "http://jenkins:8080/api/json?depth=1&tree=jobs[name,buildable,lastBuild[number,duration,result,timestamp]]"

# Get queue metrics:
curl -u admin:token http://jenkins:8080/queue/api/json
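The queue response can be reduced to a single number for alerting; an offline sketch against a saved sample response (the JSON below is illustrative, not real Jenkins output):

```shell
# Save a sample /queue/api/json response
cat > /tmp/queue.json << 'EOF'
{"items":[{"id":101,"why":"Waiting for next available executor"},{"id":102,"why":"Blocked by upstream"}]}
EOF

# Count pending items the same way a health-check job would
python3 -c "import json; print(len(json.load(open('/tmp/queue.json'))['items']))"
```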

# Alert on plugin updates:
# Script to check for updates:
'''
def pluginManager = Jenkins.instance.pluginManager
def updates = pluginManager.getUpdates()
if (updates.size() > 0) {
    println "Plugins needing updates: ${updates.size()}"
    updates.each { plugin ->
        println "${plugin.displayName}: ${plugin.version} -> ${plugin.latest}"
    }
}
'''

# Monitor JVM metrics:
# Add JVM flags to Jenkins startup:
# /etc/default/jenkins
# Java 11+ uses unified logging; -XX:+PrintGCDetails and -Xloggc were removed
JAVA_ARGS="-Xmx4g -XX:+UseG1GC -Xlog:gc*:file=/var/log/jenkins/gc.log"

# Analyze GC logs with GCViewer or HPjmeter

# Health check endpoint:
# The Metrics plugin exposes /metrics/<api-key>/healthcheck
# Returns JSON per check, e.g. {"disk-space":{"healthy":true},...}

# Create custom health check job:
pipeline {
    agent any
    triggers { cron('H/5 * * * *') }
    stages {
        stage('Health Check') {
            steps {
                script {
                    // METRICS_KEY is the Metrics plugin access key (assumed here to be set as an environment variable)
                    def health = sh(script: "curl -s http://localhost:8080/metrics/${env.METRICS_KEY}/healthcheck", returnStdout: true)
                    if (health.contains('"healthy":false')) {
                        error "Jenkins health check failed: $health"
                    }

                    def queue = sh(script: 'curl -s http://localhost:8080/queue/api/json | jq ".items | length"', returnStdout: true).trim().toInteger()
                    if (queue > 100) {
                        error "Build queue too high: $queue"
                    }
                }
            }
        }
    }
    post {
        failure {
            emailext (
                to: "ops-team@example.com",
                subject: "Jenkins Health Alert",
                body: "Jenkins health check failed!"
            )
        }
    }
}


Scenario 15: Jenkins Backup & Disaster Recovery

Implementing comprehensive backup strategy for Jenkins

sequenceDiagram
    participant Admin as Jenkins Admin
    participant Jenkins as Jenkins Server
    participant ThinBackup as ThinBackup Plugin
    participant S3 as AWS S3 Storage
    participant Snapshot as EBS Snapshot
    participant Restore as Restore Process
    participant Test as Test Instance

    Admin->>Jenkins: Configure ThinBackup
    Jenkins->>ThinBackup: Schedule daily backup
    ThinBackup->>Jenkins: Backup configs, jobs, plugins
    ThinBackup->>S3: Upload encrypted backup

    Admin->>Jenkins: Configure EBS snapshots
    Jenkins->>Snapshot: Daily volume snapshot
    Snapshot->>S3: Store snapshot

    Admin->>Test: Initiate restore
    Test->>S3: Download backup
    Test->>Restore: Restore configurations
    Restore->>Test: Validate restore

    Admin->>Test: Run test build
    Test->>Test: Build successful
    Test-->>Admin: Restore verified

    Note over S3: 30-day retention

Code:

# Install ThinBackup plugin:
# Manage Jenkins → Manage Plugins → Available → Search "ThinBackup" → Install

# Configure ThinBackup:
# Manage Jenkins → ThinBackup → Settings
# - Backup directory: /opt/jenkins-backups
# - Backup schedule: 0 2 * * * (daily at 2 AM)
# - Max backup sets: 30
# - Cleanup schedule: 0 3 * * * (daily at 3 AM)
# - Wait until Jenkins is idle: checked
# - Move old backups to ZIP files: checked
# - Backup 'userContent' folder: checked
# - Backup next build number file: checked
# - Backup build results (builds folder): unchecked (use S3 for artifacts)

# Configure S3 backup:
# Install "AWS S3 Publisher" plugin

# Create S3 backup script:
cat > /opt/jenkins/scripts/backup-to-s3.sh << 'EOF'
#!/bin/bash

JENKINS_HOME="/var/lib/jenkins"
BACKUP_DIR="/opt/jenkins-backups"
DATE=$(date +%Y%m%d_%H%M%S)
S3_BUCKET="jenkins-backups-prod"

# Run ThinBackup
curl -X POST "http://localhost:8080/manage/thinBackup/backupNow" \
  --user admin:$(cat /var/lib/jenkins/secrets/admin-token)

# Wait for backup to complete
sleep 60

# Create tarball
tar -czf $BACKUP_DIR/jenkins_backup_$DATE.tar.gz \
  -C $JENKINS_HOME \
  --exclude builds \
  --exclude workspace \
  --exclude fingerprints \
  --exclude caches \
  .

# Upload to S3 with encryption
aws s3 cp $BACKUP_DIR/jenkins_backup_$DATE.tar.gz \
  s3://$S3_BUCKET/daily/ \
  --sse AES256

# Upload with lifecycle policy
aws s3api put-object-tagging \
  --bucket $S3_BUCKET \
  --key daily/jenkins_backup_$DATE.tar.gz \
  --tagging 'TagSet=[{Key=Retention,Value=30}]'

# Cleanup local backups older than 7 days
find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete

# Log backup completion
echo "[$(date)] Backup completed: jenkins_backup_$DATE.tar.gz" >> /var/log/jenkins/backup.log
EOF

chmod +x /opt/jenkins/scripts/backup-to-s3.sh

# Add to crontab:
echo "0 2 * * * root /opt/jenkins/scripts/backup-to-s3.sh" >> /etc/crontab
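The retention step in the script above can be verified locally: `-mtime +7` matches files modified more than 7 days ago. A throwaway demo in /tmp (paths are illustrative):

```shell
mkdir -p /tmp/backup-demo
touch /tmp/backup-demo/jenkins_backup_new.tar.gz
touch -d "10 days ago" /tmp/backup-demo/jenkins_backup_old.tar.gz

# Same cleanup expression as the backup script
find /tmp/backup-demo -name "*.tar.gz" -mtime +7 -delete

# Only the recent backup survives
ls /tmp/backup-demo
```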

# Backup job configurations only:
'''
def backupJobConfigs() {
    def jenkinsHome = Jenkins.instance.rootDir
    def backupDir = new File(jenkinsHome, "job-configs-backup")
    backupDir.mkdirs()

    Jenkins.instance.allItems.each { job ->
        def configFile = new File(job.rootDir, "config.xml")
        if (configFile.exists()) {
            def destFile = new File(backupDir, "${job.fullName.replace('/', '_')}.xml")
            destFile.text = configFile.text
            println "Backed up: ${job.fullName}"
        }
    }
}
backupJobConfigs()
'''

# Restore process:
cat > /opt/jenkins/scripts/restore-from-s3.sh << 'EOF'
#!/bin/bash

# Use the full timestamp from the backup filename (e.g. 20241130_020000)
RESTORE_DATE=${1:?"usage: restore-from-s3.sh <YYYYmmdd_HHMMSS>"}
S3_BUCKET="jenkins-backups-prod"
JENKINS_HOME="/var/lib/jenkins"

# Stop Jenkins
systemctl stop jenkins

# Download backup
aws s3 cp s3://$S3_BUCKET/daily/jenkins_backup_$RESTORE_DATE.tar.gz \
  /opt/jenkins-backups/

# Extract backup
tar -xzf /opt/jenkins-backups/jenkins_backup_$RESTORE_DATE.tar.gz \
  -C $JENKINS_HOME

# Fix permissions
chown -R jenkins:jenkins $JENKINS_HOME

# Start Jenkins
systemctl start jenkins

echo "Restore completed. Check Jenkins at http://localhost:8080"
EOF

# Docker volume backup:
# If using Docker, backup volumes:
docker run --rm \
  -v jenkins_home:/data \
  -v /opt/backups:/backup \
  alpine tar cvzf /backup/jenkins_home_$(date +%Y%m%d).tar.gz /data

# Restore Docker volume:
docker run --rm \
  -v jenkins_home:/data \
  -v /opt/backups:/backup \
  alpine sh -c "cd /data && tar xvzf /backup/jenkins_home_20241130.tar.gz --strip-components=1"
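The strip-components option drops the leading `data/` prefix that tar records at backup time; the round trip, demonstrated on a throwaway directory:

```shell
mkdir -p /tmp/tar-demo/data /tmp/tar-demo/restore
echo "<jenkins/>" > /tmp/tar-demo/data/config.xml

# Archive relative to /tmp/tar-demo so entries are prefixed with data/
tar -C /tmp/tar-demo -czf /tmp/tar-demo/backup.tar.gz data

# Extract while stripping the data/ prefix, as the volume restore does
tar -xzf /tmp/tar-demo/backup.tar.gz -C /tmp/tar-demo/restore --strip-components=1

ls /tmp/tar-demo/restore
```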

# S3 lifecycle policy for cost optimization:
cat > s3-lifecycle.json << 'EOF'
{
    "Rules": [
        {
            "ID": "JenkinsBackupLifecycle",
            "Status": "Enabled",
            "Transitions": [
                {
                    "Days": 7,
                    "StorageClass": "STANDARD_IA"
                },
                {
                    "Days": 30,
                    "StorageClass": "GLACIER"
                }
            ],
            "Expiration": {
                "Days": 365
            }
        }
    ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket jenkins-backups-prod \
  --lifecycle-configuration file://s3-lifecycle.json

# Backup encryption:
# Use KMS for encryption:
aws s3 cp backup.tar.gz s3://bucket/ \
  --sse aws:kms \
  --sse-kms-key-id arn:aws:kms:us-east-1:123456789012:key/12345

# Multi-region backup:
# Sync to another region:
aws s3 sync s3://jenkins-backups-prod s3://jenkins-backups-dr --region us-west-2

# Point-in-time recovery:
# Use EBS snapshots for full system recovery:
aws ec2 create-snapshot \
  --volume-id vol-12345678 \
  --description "Jenkins point-in-time backup $(date)" \
  --tag-specifications 'ResourceType=snapshot,Tags=[{Key=JenkinsBackup,Value=true}]'

# Automated snapshot with Lambda:
# Create Lambda function triggered by CloudWatch Events

# Test restore procedure monthly:
# Create test environment:
docker run -d \
  --name jenkins-test \
  -p 8081:8080 \
  -v jenkins_test_home:/var/jenkins_home \
  jenkins/jenkins:lts

# Run restore:
/opt/jenkins/scripts/restore-from-s3.sh 20241130_020000

# Run test build:
curl -X POST http://localhost:8081/job/test-pipeline/build \
  --user admin:password

# Verify results
# Cleanup:
docker stop jenkins-test && docker rm jenkins-test
docker volume rm jenkins_test_home

# Backup secrets:
# Export credentials:
'''
def backupCredentials() {
    def creds = com.cloudbees.plugins.credentials.CredentialsProvider.all()
    creds.each { store ->
        store.credentials.each { credential ->
            println "Credential: ${credential.id} - ${credential.description}"
        }
    }
}
backupCredentials()
'''

# Document restore runbook:
cat > /opt/jenkins/docs/disaster-recovery.md << 'EOF'
# Jenkins Disaster Recovery

## RTO: 2 hours
## RPO: 24 hours

### Steps:
1. Provision new Jenkins server
2. Install Java 17: apt install openjdk-17-jdk
3. Install Jenkins: follow installation guide
4. Run restore script: /opt/jenkins/scripts/restore-from-s3.sh <date>
5. Verify plugins: Manage Jenkins → Manage Plugins
6. Test critical jobs
7. Update DNS/load balancer

### Contact:
- On-call: +1-555-0123
- Team email: infra-team@example.com
EOF


Scenario 16: Pipeline Performance Optimization

Speeding up builds with caching, artifact management, and parallelization

sequenceDiagram
    participant Pipeline as Jenkins Pipeline
    participant Cache as Build Cache
    participant Registry as Artifact Registry
    participant Parallel as Parallel Stages
    participant Build as Build Stage
    participant Test as Test Stage
    participant Deploy as Deploy Stage

    Pipeline->>Cache: Check for cached dependencies
    Cache->>Pipeline: Return cached artifacts

    Pipeline->>Parallel: Start parallel execution
    Parallel->>Build: Run build (cached deps)
    Parallel->>Test: Run tests (cached deps)

    Build->>Registry: Publish build artifacts
    Test->>Cache: Update test cache

    Pipeline->>Deploy: Deploy only on main branch
    Deploy->>Cache: Clean old cache entries

    Note over Cache: 70% faster builds

Code:

# Maven build cache:
pipeline {
    agent {
        docker {
            image 'maven:3.8-openjdk-17'
            args '-v maven-cache:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -Dmaven.repo.local=/root/.m2/repository clean compile'
            }
        }
    }
    post {
        always {
            // Declarative blocks may not be empty; the cache itself is
            // persisted in the maven-cache Docker volume automatically
            echo 'Dependency cache persisted in maven-cache volume'
        }
    }
}

# npm cache optimization:
pipeline {
    agent {
        docker {
            image 'node:18'
            args '-v npm-cache:/home/node/.npm'
        }
    }
    stages {
        stage('Install') {
            steps {
                sh '''
                    npm config set cache /home/node/.npm --global
                    npm ci --prefer-offline
                '''
            }
        }
    }
}

# Docker layer caching:
stage('Docker Build') {
    steps {
        sh '''
            # Pull previous image for cache
            docker pull myapp:latest || true

            # Build with cache
            docker build \
              --cache-from myapp:latest \
              --tag myapp:${BUILD_NUMBER} \
              --tag myapp:latest \
              .

            # Push for next build
            docker push myapp:latest
        '''
    }
}

# Jenkins workspace caching:
pipeline {
    agent {
        kubernetes {
            yaml '''
spec:
  containers:
  - name: build
    image: maven:3.8
    volumeMounts:
    - name: workspace-cache
      mountPath: /workspace/.cache
  volumes:
  - name: workspace-cache
    persistentVolumeClaim:
      claimName: jenkins-workspace-cache
'''
        }
    }
    stages {
        stage('Cache Gradle') {
            steps {
                sh '''
                    export GRADLE_USER_HOME=/workspace/.cache/gradle
                    ./gradlew build --build-cache
                '''
            }
        }
    }
}

# Parallelize long-running tasks:
stage('Fast Build') {
    parallel {
        stage('Lint') {
            steps {
                sh 'npm run lint'
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'npm test'
            }
        }
        stage('Type Check') {
            steps {
                sh 'npm run type-check'
            }
        }
    }
}

# Conditional stage execution:
stage('Integration Tests') {
    when {
        anyOf {
            branch 'main'
            branch 'develop'
            changeset "**/src/**"
        }
        not {
            changelog '.*\\[skip-ci\\].*'
        }
    }
    steps {
        sh 'npm run test:integration'
    }
}

# Artifact archival optimization:
stage('Archive') {
    steps {
        sh 'tar -czf dist.tar.gz dist/'
        archiveArtifacts artifacts: 'dist.tar.gz', fingerprint: true

        // Use S3 for large artifacts:
        withAWS(credentials: 'aws-build-artifacts') {
            sh '''
                aws s3 cp dist.tar.gz s3://jenkins-artifacts/${JOB_NAME}/${BUILD_NUMBER}/
            '''
        }
    }
}

# Incremental builds:
stage('Incremental Build') {
    steps {
        sh '''
            # Only build changed modules
            git diff --name-only HEAD~1 | grep -q "src/" && ./gradlew build || echo "No changes"
        '''
    }
}

# Pipeline shared library loading:
@Library(value='my-shared-lib@main', changelog=false) _

# changelog=false keeps library commits out of the build's changelog

# Shared library caching:
# Manage Jenkins → Configure System → Global Pipeline Libraries
# - Enable "Cache fetched versions on controller for quick retrieval"

# Adjust executor count:
# Manage Jenkins → Nodes → Configure → # of executors
# Set based on CPU cores: (#cores * 2) + 1
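The heuristic above can be computed on the agent itself; a small sketch (nproc is GNU coreutils, and this only prints a suggestion, it does not change Jenkins configuration):

```shell
# Suggest an executor count from the core count: (#cores * 2) + 1
cores=$(nproc)
suggested=$(( cores * 2 + 1 ))
echo "CPU cores: ${cores}, suggested executors: ${suggested}"
```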

# Use build discarder:
options {
    buildDiscarder(
        logRotator(
            artifactDaysToKeepStr: '7',
            artifactNumToKeepStr: '10',
            daysToKeepStr: '30',
            numToKeepStr: '50'
        )
    )
}

// This keeps builds/logs manageable

# Optimize JVM flags:
# /etc/default/jenkins (JDK 11+ uses unified logging; -Xloggc is the deprecated legacy form)
JAVA_ARGS="-Xmx8g -XX:+UseG1GC -XX:+UseStringDeduplication -Xlog:gc*:file=/var/log/jenkins/gc.log"

# Disable unnecessary features:
# Manage Jenkins → Configure System → Global properties
# - Environment variables: Keep minimal
# - Tool locations: Only configure used tools

# Use lightweight checkout:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout([
                    $class: 'GitSCM',
                    branches: [[name: '*/main']],
                    doGenerateSubmoduleConfigurations: false,
                    extensions: [[
                        $class: 'SparseCheckoutPaths',
                        sparseCheckoutPaths: [[path: 'src/'], [path: 'Jenkinsfile']]
                    ]],
                    submoduleCfg: [],
                    userRemoteConfigs: [[
                        url: 'https://github.com/myorg/myapp'
                    ]]
                ])
            }
        }
    }
}

# Speed up Git submodule fetches:
pipeline {
    stages {
        stage('Setup') {
            steps {
                sh '''
                    git config --global http.postBuffer 524288000
                    git config --global core.compression 0
                    git submodule update --init --recursive --depth 1
                '''
            }
        }
    }
}

# Optimize triggered builds:
# Use quiet period to batch commits:
options {
    quietPeriod(30) // Wait 30 seconds for more commits
}

# Use rate limiting:
// Limit concurrent builds
options {
    throttleJobProperty(
        categories: ['docker-builds'],
        throttleEnabled: true,
        throttleOption: 'category'
    )
}

# Pipeline timeout:
options {
    timeout(time: 30, unit: 'MINUTES')
}

# Fail fast for parallel stages:
stage('Critical Checks') {
    failFast true
    parallel {
        stage('Security Scan') { steps { sh 'trivy scan' } }
        stage('License Check') { steps { sh 'fossa check' } }
    }
}

# Preserve stashes across Pipeline restarts:
options {
    preserveStashes()
}

# Reuse the workspace node for Docker agents
# (reuseNode is a docker agent option, not a pipeline option):
agent {
    docker {
        image 'node:18'
        reuseNode true
    }
}

# Optimize Docker builds with BuildKit:
stage('Fast Docker Build') {
    environment {
        DOCKER_BUILDKIT = '1'
    }
    steps {
        sh '''
            # --cache-from/--cache-to type=local requires buildx
            docker buildx build \
              --progress=plain \
              --secret id=npmrc,src=.npmrc \
              --cache-from type=local,src=/cache \
              --cache-to type=local,dest=/cache,mode=max \
              --load \
              -t myapp:${BUILD_NUMBER} .
        '''
    }
}

# Use custom tools for speed:
pipeline {
    tools {
        maven 'Maven-3.8.6' // Pre-configured tool installation, no download on each build
        nodejs 'NodeJS-18'
    }
}


Scenario 17: Multi-Cloud Deployment Strategies

Deploying to AWS, Azure, and GCP from a single pipeline

sequenceDiagram
    participant Pipeline as Jenkins Pipeline
    participant AWS as AWS (Primary)
    participant Azure as Azure (Secondary)
    participant GCP as GCP (DR)
    participant Artifact as Artifact Registry
    participant Deploy as Deployment Script
    participant Route53 as Route53 DNS

    Pipeline->>Artifact: Build & push container
    Pipeline->>Deploy: Trigger multi-cloud deploy

    Deploy->>AWS: Deploy to EKS (us-east-1)
    Deploy->>Azure: Deploy to AKS (East US)
    Deploy->>GCP: Deploy to GKE (us-central1)

    AWS->>AWS: Run smoke tests
    Azure->>Azure: Run smoke tests
    GCP->>GCP: Run smoke tests

    alt AWS healthy
        Route53->>AWS: 100% traffic
    else AWS degraded
        Route53->>Azure: Failover traffic
    end

    Note over Pipeline: Single pipeline, multi-cloud

Code:

# Multi-cloud pipeline:
pipeline {
    agent any

    environment {
        IMAGE_TAG = "myapp:${BUILD_NUMBER}"
        AWS_REGION = "us-east-1"
        AZURE_REGION = "eastus"
        GCP_REGION = "us-central1"
    }

    stages {
        stage('Build & Push') {
            steps {
                script {
                    // Build multi-arch image
                    sh '''
                        docker buildx build \
                          --platform linux/amd64,linux/arm64 \
                          -t ${IMAGE_TAG} \
                          --push \
                          .
                    '''
                }
            }
        }

        stage('Deploy to AWS') {
            steps {
                withAWS(credentials: 'aws-prod', region: AWS_REGION) {
                    sh '''
                        aws eks update-kubeconfig --name prod-cluster
                        kubectl set image deployment/myapp app=${IMAGE_TAG}
                        kubectl rollout status deployment/myapp
                    '''
                }
            }
            post {
                success {
                    sh '''
                        # Run AWS-specific smoke tests
                        pytest tests/smoke/test_aws.py
                    '''
                }
            }
        }

        stage('Deploy to Azure') {
            steps {
                withCredentials([azureServicePrincipal('azure-sp')]) {
                    sh '''
                        az aks get-credentials --name prod-aks --resource-group prod-rg
                        kubectl set image deployment/myapp app=${IMAGE_TAG}
                        kubectl rollout status deployment/myapp
                    '''
                }
            }
            post {
                success {
                    sh '''
                        # Run Azure-specific tests
                        pytest tests/smoke/test_azure.py
                    '''
                }
            }
        }

        stage('Deploy to GCP') {
            steps {
                withCredentials([file(credentialsId: 'gcp-sa', variable: 'GOOGLE_APPLICATION_CREDENTIALS')]) {
                    sh '''
                        gcloud container clusters get-credentials prod-gke --region ${GCP_REGION}
                        kubectl set image deployment/myapp app=${IMAGE_TAG}
                        kubectl rollout status deployment/myapp
                    '''
                }
            }
            post {
                success {
                    sh '''
                        # Run GCP-specific tests
                        pytest tests/smoke/test_gcp.py
                    '''
                }
            }
        }

        stage('Update DNS') {
            steps {
                script {
                    // Route53 health check
                    def awsHealthy = sh(script: './check_aws_health.sh', returnStatus: true) == 0
                    def azureHealthy = sh(script: './check_azure_health.sh', returnStatus: true) == 0

                    if (awsHealthy) {
                        sh 'aws route53 change-resource-record-sets --hosted-zone-id Z12345 --change-batch file://aws-primary.json'
                    } else if (azureHealthy) {
                        sh 'aws route53 change-resource-record-sets --hosted-zone-id Z12345 --change-batch file://azure-failover.json'
                    } else {
                        sh 'aws route53 change-resource-record-sets --hosted-zone-id Z12345 --change-batch file://gcp-dr.json'
                    }
                }
            }
        }
    }
}

# Cloud-specific deployment functions:
// vars/deployAWS.groovy
def call(Map config) {
    def region = config.region ?: 'us-east-1'
    def cluster = config.cluster ?: 'prod-cluster'

    withAWS(credentials: config.credentialsId ?: 'aws', region: region) {
        sh """
            aws eks update-kubeconfig --name ${cluster}
            kubectl apply -f ${config.manifest}
            kubectl rollout status deployment/${config.deployment} -n ${config.namespace}
        """
    }
}

// vars/deployAzure.groovy
def call(Map config) {
    withCredentials([azureServicePrincipal('azure-sp')]) {
        sh """
            az aks get-credentials --name ${config.cluster} --resource-group ${config.resourceGroup}
            kubectl apply -f ${config.manifest}
        """
    }
}

// vars/deployGCP.groovy
def call(Map config) {
    withCredentials([file(credentialsId: 'gcp-sa', variable: 'GOOGLE_APPLICATION_CREDENTIALS')]) {
        sh """
            gcloud auth activate-service-account --key-file=\$GOOGLE_APPLICATION_CREDENTIALS
            gcloud container clusters get-credentials ${config.cluster} --region ${config.region}
            kubectl apply -f ${config.manifest}
        """
    }
}

# Use in pipeline:
stage('Deploy Multi-Cloud') {
    parallel {
        stage('AWS') {
            steps {
                deployAWS(
                    cluster: 'prod-eks',
                    manifest: 'k8s/aws-deployment.yaml',
                    deployment: 'myapp',
                    namespace: 'production'
                )
            }
        }
        stage('Azure') {
            steps {
                deployAzure(
                    cluster: 'prod-aks',
                    resourceGroup: 'prod-rg',
                    manifest: 'k8s/azure-deployment.yaml'
                )
            }
        }
    }
}

# Cloud-agnostic manifest generation:
stage('Generate Manifests') {
    steps {
        script {
            def template = readFile('k8s/deployment-template.yaml')
            def manifest = template
                .replace('{{IMAGE_TAG}}', IMAGE_TAG)
                .replace('{{REPLICAS}}', params.REPLICAS)
                .replace('{{NODE_SELECTOR}}', NODE_SELECTOR)

            writeFile(file: 'k8s/generated-deployment.yaml', text: manifest)
        }
    }
}
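The same templating works in plain shell; a hedged equivalent of the Groovy step above using sed (the inline demo template and values are assumptions standing in for k8s/deployment-template.yaml):

```shell
# Demo template; in the pipeline this file would come from the repo
printf 'image: {{IMAGE_TAG}}\nreplicas: {{REPLICAS}}\n' > deployment-template.yaml

IMAGE_TAG="myapp:42"   # demo values; the pipeline supplies these
REPLICAS="3"

# Substitute each {{PLACEHOLDER}} and write the generated manifest
sed -e "s|{{IMAGE_TAG}}|${IMAGE_TAG}|g" \
    -e "s|{{REPLICAS}}|${REPLICAS}|g" \
    deployment-template.yaml > generated-deployment.yaml

cat generated-deployment.yaml
```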

# Terraform for cloud resources:
stage('Provision Infrastructure') {
    steps {
        dir('terraform') {
            sh '''
                terraform apply -auto-approve -var="region=${CLOUD_REGION}"
            '''
        }
    }
}

# Use cloud-specific artifact repositories:
stage('Push Artifacts') {
    parallel {
        stage('Push to ECR') {
            steps {
                withAWS(credentials: 'aws') {
                    sh '''
                        aws ecr get-login-password | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
                        docker tag myapp:${BUILD_NUMBER} 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:${BUILD_NUMBER}
                        docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:${BUILD_NUMBER}
                    '''
                }
            }
        }
        stage('Push to ACR') {
            steps {
                withCredentials([azureServicePrincipal('azure-sp')]) {
                    sh '''
                        az acr login --name myregistry
                        docker tag myapp:${BUILD_NUMBER} myregistry.azurecr.io/myapp:${BUILD_NUMBER}
                        docker push myregistry.azurecr.io/myapp:${BUILD_NUMBER}
                    '''
                }
            }
        }
    }
}

# Blue-green deployment pattern:
def deployBlueGreen(cloud, region, cluster) {
    sh """
        # Deploy green version
        kubectl apply -f k8s/${cloud}/deployment-green.yaml

        # Wait for green to be ready
        kubectl wait --for=condition=available --timeout=300s deployment/myapp-green

        # Run smoke tests
        ./scripts/smoke-test-${cloud}.sh

        # Update service to point to green
        kubectl patch service myapp -p '{"spec":{"selector":{"version":"green"}}}'

        # Scale down blue
        kubectl scale deployment myapp-blue --replicas=0
    """
}

# Canary deployment:
def deployCanary(cloud, region, cluster, canaryPercent = 10) {
    sh """
        # Deploy canary
        kubectl apply -f k8s/${cloud}/deployment-canary.yaml

        # Wait
        kubectl wait --for=condition=available deployment/myapp-canary

        # Split traffic
        ./scripts/set-traffic-split.sh ${cloud} ${canaryPercent}

        # Monitor metrics
        ./scripts/monitor-canary.sh ${cloud}
    """
}

# Rollback functionality:
def rollback(cloud, region, cluster) {
    sh """
        # Rollback deployment
        kubectl rollout undo deployment/myapp

        # Or switch traffic back
        kubectl patch service myapp -p '{"spec":{"selector":{"version":"blue"}}}'
    """
}

# Multi-cloud secrets management:
stage('Retrieve Secrets') {
    parallel {
        stage('AWS Secrets') {
            steps {
                withAWS(credentials: 'aws') {
                    sh '''
                        aws secretsmanager get-secret-value --secret-id prod/database > aws-secrets.json
                        kubectl create secret generic db-creds --from-file=aws-secrets.json --dry-run=client -o yaml | kubectl apply -f -
                    '''
                }
            }
        }
        stage('Azure Key Vault') {
            steps {
                withCredentials([azureServicePrincipal('azure-sp')]) {
                    sh '''
                        az keyvault secret show --name database-password --vault-name prod-vault > azure-secret.json
                    '''
                }
            }
        }
    }
}

# Cost optimization:
# (cron is a trigger, not a when condition: schedule the pipeline with
# triggers { cron(...) }, then gate the stage on the timer)
triggers {
    cron('H 18 * * *') // 6 PM daily
}

stage('Scale Down Dev Environments') {
    when {
        triggeredBy 'TimerTrigger'
    }
    steps {
        script {
            ['aws', 'azure', 'gcp'].each { cloud ->
                sh """
                    ./scripts/scale-down-dev.sh ${cloud}
                """
            }
        }
    }
}


Scenario 18: Custom Jenkins Plugin Development

Extending Jenkins with custom functionality

sequenceDiagram
    participant Dev as Plugin Developer
    participant Maven as Maven Build
    participant Plugin as Custom Plugin
    participant Jenkins as Jenkins Controller
    participant Job as Jenkins Job
    participant User as End User

    Dev->>Maven: mvn archetype:generate
    Maven->>Plugin: Create plugin skeleton
    Dev->>Plugin: Implement custom build step
    Dev->>Maven: mvn package
    Maven->>Plugin: Generate .hpi file

    Dev->>Jenkins: Upload plugin.hpi
    Jenkins->>Plugin: Install and enable

    User->>Job: Configure job
    Job->>Plugin: Add custom step
    Plugin->>Job: Execute custom logic

    Plugin->>User: Show custom UI/report

    Note over Plugin: Tailored enterprise features

Code:

# Set up plugin development environment:
# Install Maven 3.8+ and JDK 11
sudo apt install maven openjdk-11-jdk

# Generate plugin skeleton:
mvn archetype:generate \
  -DarchetypeGroupId=io.jenkins.archetypes \
  -DarchetypeArtifactId=hello-world-plugin \
  -DgroupId=com.mycompany.jenkins \
  -DartifactId=my-custom-plugin

cd my-custom-plugin

# Plugin directory structure:
# .
# ├── pom.xml
# ├── src
# │   ├── main
# │   │   ├── java/com/mycompany/jenkins
# │   │   │   └── HelloWorldBuilder.java
# │   │   └── resources
# │   │       ├── index.jelly
# │   │       └── com/mycompany/jenkins
# │   │           └── HelloWorldBuilder
# │   │               ├── config.jelly
# │   │               └── help-name.html
# │   └── test/java/com/mycompany/jenkins
# │       └── HelloWorldBuilderTest.java

# Create custom build step (Builder):
cat > src/main/java/com/mycompany/jenkins/CustomDeployStep.java << 'EOF'
package com.mycompany.jenkins;

import hudson.AbortException;
import hudson.Launcher;
import hudson.Extension;
import hudson.FilePath;
import hudson.util.FormValidation;
import hudson.model.AbstractProject;
import hudson.model.Run;
import hudson.model.TaskListener;
import hudson.tasks.Builder;
import hudson.tasks.BuildStepDescriptor;
import jenkins.tasks.SimpleBuildStep;
import org.jenkinsci.Symbol;
import org.kohsuke.stapler.DataBoundConstructor;
import org.kohsuke.stapler.QueryParameter;

import javax.servlet.ServletException;
import java.io.IOException;

// SimpleBuildStep makes the step usable from both freestyle and Pipeline
// jobs; its perform() takes Run/FilePath and returns void, so failures
// are signalled by throwing AbortException instead of returning false.
public class CustomDeployStep extends Builder implements SimpleBuildStep {
    private final String environment;
    private final String version;
    private final boolean autoApprove;

    @DataBoundConstructor
    public CustomDeployStep(String environment, String version, boolean autoApprove) {
        this.environment = environment;
        this.version = version;
        this.autoApprove = autoApprove;
    }

    public String getEnvironment() {
        return environment;
    }

    public String getVersion() {
        return version;
    }

    public boolean getAutoApprove() {
        return autoApprove;
    }

    @Override
    public void perform(Run<?,?> run, FilePath workspace, Launcher launcher, TaskListener listener)
            throws InterruptedException, IOException {
        listener.getLogger().println("Deploying " + version + " to " + environment);

        if (autoApprove) {
            listener.getLogger().println("Auto-approval enabled");
        }

        // Execute deployment command; fail the build on a non-zero exit code
        int exitCode = launcher.launch()
            .cmds("deploy.sh", environment, version)
            .stdout(listener)
            .pwd(workspace)
            .join();

        if (exitCode != 0) {
            throw new AbortException("Deployment failed with exit code " + exitCode);
        }
    }

    @Symbol("customDeployStep")
    @Extension
    public static final class DescriptorImpl extends BuildStepDescriptor<Builder> {
        @Override
        public boolean isApplicable(Class<? extends AbstractProject> aClass) {
            return true;
        }

        @Override
        public String getDisplayName() {
            return "Custom Deploy Step";
        }

        public FormValidation doCheckEnvironment(@QueryParameter String value)
                throws IOException, ServletException {
            if (value.length() == 0) {
                return FormValidation.error("Environment is required");
            }
            if (!value.matches("^(dev|staging|prod)$")) {
                return FormValidation.warning("Environment should be dev, staging, or prod");
            }
            return FormValidation.ok();
        }
    }
}
EOF

# Create UI configuration (config.jelly):
cat > src/main/resources/com/mycompany/jenkins/CustomDeployStep/config.jelly << 'EOF'
<?jelly escape-by-default='true'?>
<j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler" xmlns:f="/lib/form">
  <f:entry title="Environment" field="environment">
    <f:textbox />
  </f:entry>
  <f:entry title="Version" field="version">
    <f:textbox />
  </f:entry>
  <f:entry title="Auto Approve" field="autoApprove">
    <f:checkbox />
  </f:entry>
</j:jelly>
EOF

# Help file for field:
cat > src/main/resources/com/mycompany/jenkins/CustomDeployStep/help-environment.html << 'EOF'
<div>
  Target environment for deployment. Must be one of: dev, staging, prod
</div>
EOF

# Build plugin:
mvn clean package

# Install plugin in Jenkins:
# Manage Jenkins → Manage Plugins → Advanced → Upload → Select target/my-custom-plugin.hpi

# Reload configuration or restart Jenkins
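For automated installs, the same upload can be scripted against the plugin manager's upload endpoint (a command sketch against a live controller; the URL, user, and API token are assumptions, and a CSRF crumb may be required depending on your security settings):

```shell
# Upload the built .hpi over HTTP (same endpoint the Advanced → Upload form uses)
JENKINS_URL="http://localhost:8080"   # assumed controller URL
curl -i --user admin:apitoken \
  -F "file=@target/my-custom-plugin.hpi" \
  "${JENKINS_URL}/pluginManager/uploadPlugin"
```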

# Use plugin in pipeline:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                customDeployStep(
                    environment: 'staging',
                    version: '1.0.0',
                    autoApprove: true
                )
            }
        }
    }
}

# Create custom publisher (post-build action):
cat > src/main/java/com/mycompany/jenkins/CustomReportPublisher.java << 'EOF'
package com.mycompany.jenkins;

import hudson.model.Action;
import hudson.model.Run;
import hudson.model.AbstractProject;
import hudson.tasks.BuildStepDescriptor;
import hudson.tasks.BuildStepMonitor;
import hudson.tasks.Publisher;
import hudson.tasks.Recorder;
import hudson.Extension;
import hudson.FilePath;
import hudson.Launcher;
import hudson.model.TaskListener;
import jenkins.tasks.SimpleBuildStep;
import org.kohsuke.stapler.DataBoundConstructor;

import java.io.IOException;

// As with the builder above, implement SimpleBuildStep so perform() can
// take Run/FilePath and work from Pipeline as well as freestyle jobs.
public class CustomReportPublisher extends Recorder implements SimpleBuildStep {
    private final String reportPath;

    @DataBoundConstructor
    public CustomReportPublisher(String reportPath) {
        this.reportPath = reportPath;
    }

    public String getReportPath() {
        return reportPath;
    }

    @Override
    public BuildStepMonitor getRequiredMonitorService() {
        return BuildStepMonitor.NONE;
    }

    @Override
    public void perform(Run<?,?> build, FilePath workspace, Launcher launcher, TaskListener listener)
            throws InterruptedException, IOException {

        listener.getLogger().println("Publishing custom report from " + reportPath);

        // Copy report into the build's archive directory
        FilePath report = workspace.child(reportPath);
        if (report.exists()) {
            FilePath archive = new FilePath(new java.io.File(build.getRootDir(), "archive"));
            archive.mkdirs();
            FilePath copied = archive.child("custom-report.html");
            report.copyTo(copied);

            // Add action to show the report in the build UI
            build.addAction(new CustomReportAction(build, copied));
        }
    }

    @Extension
    public static final class DescriptorImpl extends BuildStepDescriptor<Publisher> {
        @Override
        public boolean isApplicable(Class<? extends AbstractProject> jobType) {
            return true;
        }

        @Override
        public String getDisplayName() {
            return "Publish Custom Report";
        }
    }
}

class CustomReportAction implements Action {
    private final Run<?,?> build;
    private final FilePath reportPath;

    public CustomReportAction(Run<?,?> build, FilePath reportPath) {
        this.build = build;
        this.reportPath = reportPath;
    }

    public String getIconFileName() {
        return "document.png";
    }

    public String getDisplayName() {
        return "Custom Report";
    }

    public String getUrlName() {
        return "custom-report";
    }

    public String getReportContent() throws IOException {
        // Read and return report content
        return reportPath.readToString();
    }
}
EOF

# Create custom CLI command:
cat > src/main/java/com/mycompany/jenkins/CustomCLICommand.java << 'EOF'
package com.mycompany.jenkins;

import hudson.Extension;
import hudson.cli.CLICommand;
import hudson.model.Job;
import org.kohsuke.args4j.Argument;

@Extension
public class CustomCLICommand extends CLICommand {
    @Override
    public String getShortDescription() {
        return "Trigger custom deployment";
    }

    @Argument(metaVar = "JOB", usage = "Job to deploy", required = true)
    public Job<?,?> job;

    @Argument(metaVar = "ENV", usage = "Environment", required = true)
    public String environment;

    @Override
    protected int run() throws Exception {
        stdout.println("Deploying job: " + job.getDisplayName() + " to environment: " + environment);

        // Trigger deployment
        job.scheduleBuild2(0).get();

        return 0;
    }
}
EOF

# Debug plugin development:
# Run Jenkins with plugin locally:
mvn hpi:run -Djetty.port=8080

# Jenkins will start at http://localhost:8080 with plugin installed

# Write tests:
cat > src/test/java/com/mycompany/jenkins/CustomDeployStepTest.java << 'EOF'
package com.mycompany.jenkins;

import hudson.model.FreeStyleBuild;
import hudson.model.FreeStyleProject;
import hudson.model.Result;
import org.junit.Rule;
import org.junit.Test;
import org.jvnet.hudson.test.JenkinsRule;

import java.io.IOException;

import static org.junit.Assert.*;

public class CustomDeployStepTest {
    @Rule
    public JenkinsRule jenkins = new JenkinsRule();

    @Test
    public void testConfig() {
        CustomDeployStep step = new CustomDeployStep("staging", "1.0.0", true);
        assertEquals("staging", step.getEnvironment());
        assertEquals("1.0.0", step.getVersion());
        assertTrue(step.getAutoApprove());
    }

    @Test
    public void testBuild() throws Exception {
        FreeStyleProject project = jenkins.createFreeStyleProject();
        CustomDeployStep step = new CustomDeployStep("dev", "1.0.0", false);
        project.getBuildersList().add(step);

        FreeStyleBuild build = project.scheduleBuild2(0).get();
        assertEquals(Result.SUCCESS, build.getResult());
    }
}
EOF

# Run tests:
mvn test

# Package for distribution:
mvn clean install
# Creates target/my-custom-plugin.hpi and .jpi

# Publish to the Jenkins project's Artifactory
# (requires a repo.jenkins-ci.org account configured in ~/.m2/settings.xml):
mvn deploy

# Version management:
# Use Incrementals for automatic versioning:
# Add to pom.xml:
'''
<properties>
  <jenkins.version>2.361.4</jenkins.version>
  <gitHubRepo>myorg/my-custom-plugin</gitHubRepo>
</properties>
<scm>
  <connection>scm:git:git://github.com/${gitHubRepo}.git</connection>
  <developerConnection>scm:git:git@github.com:${gitHubRepo}.git</developerConnection>
  <url>https://github.com/${gitHubRepo}</url>
  <tag>${scmTag}</tag>
</scm>
'''

# Create incremental release:
mvn incrementals:incrementalify
mvn release:prepare release:perform

# Plugin description (index.jelly in src/main/resources is shown
# in the plugin manager):
cat > src/main/resources/index.jelly << 'EOF'
<?jelly escape-by-default='true'?>
<div>
  <h1>My Custom Plugin</h1>
  <p>This plugin provides custom deployment steps for enterprise Jenkins.</p>
  <h2>Features</h2>
  <ul>
    <li>Environment-aware deployment</li>
    <li>Auto-approval workflows</li>
    <li>Custom reporting</li>
  </ul>
</div>
EOF

# Security considerations:
// Use @Restricted annotations
import org.kohsuke.accmod.Restricted;
import org.kohsuke.accmod.restrictions.NoExternalUse;

@Restricted(NoExternalUse.class)
public void internalMethod() {
    // Not accessible from outside
}

// Use ACL checks:
import jenkins.model.Jenkins;
import hudson.security.ACL;
import hudson.security.ACLContext;

try (ACLContext context = ACL.as(ACL.SYSTEM)) {
    // Run with SYSTEM permissions
}

// Sanitize user inputs:
import org.apache.commons.lang.StringEscapeUtils;

String safeInput = StringEscapeUtils.escapeHtml(userInput);


Scenario 19: Jenkins Configuration as Code (JCasC)

Managing Jenkins entirely through configuration files

sequenceDiagram
    participant Dev as Developer
    participant Git as Git Repository
    participant YAML as JCasC YAML Files
    participant Jenkins as Jenkins Controller
    participant Plugin as JCasC Plugin
    participant Config as Jenkins Config

    Git->>YAML: Store jenkins.yaml
    YAML->>Plugin: Load credentials, plugins, jobs
    Plugin->>Jenkins: Apply configuration

    Dev->>Git: Update jenkins.yaml
    Git->>Jenkins: Webhook triggers
    Jenkins->>Plugin: Reload configuration

    Plugin->>Jenkins: Update settings
    Jenkins-->>Dev: Configuration applied

    Note over YAML: Infrastructure as Code

Code:

# Install JCasC plugin:
# Manage Jenkins → Manage Plugins → Available → Search "Configuration as Code" → Install

# Create jenkins.yaml:
cat > jenkins.yaml << 'EOF'
jenkins:
  systemMessage: "Jenkins configured by Configuration as Code"
  numExecutors: 4
  scmCheckoutRetryCount: 2
  mode: NORMAL

  securityRealm:
    ldap:
      configurations:
        - server: ldap.example.com:389
          rootDN: dc=example,dc=com
          managerDN: cn=admin,dc=example,dc=com
          managerPasswordSecret: "${LDAP_ADMIN_PASSWORD}"
          userSearch: uid={0}
          groupSearchBase: ou=groups

  authorizationStrategy:
    projectMatrix:
      permissions:
        - "Overall/Administer:admin"
        - "Overall/Read:authenticated"
        - "Job/Build:developer"
        - "Job/Read:authenticated"

  credentials:
    system:
      domainCredentials:
        - credentials:
            - string:
                id: "github-token"
                secret: "${GITHUB_TOKEN}"
                description: "GitHub Access Token"
            - usernamePassword:
                id: "docker-credentials"
                username: dockerbot
                password: "${DOCKER_PASSWORD}"
                description: "Docker Hub credentials"
            - sshUserPrivateKey:
                id: "github-ssh-key"
                username: git
                privateKey: "${GITHUB_SSH_KEY}"

  clouds:
    - kubernetes:
        name: "kubernetes"
        serverUrl: "https://kubernetes.default"
        namespace: "jenkins-agents"
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
        jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
        credentialsId: "kube-service-account"
        webSocket: true
        containerCapStr: "10"
        templates:
          - name: "maven-agent"
            namespace: "jenkins-agents"
            label: "maven"
            serviceAccount: "jenkins"
            containers:
              - name: "jnlp"
                image: "jenkins/inbound-agent:latest"
                args: "^${computer.jnlpmac} ^${computer.name}"
                resourceLimitCpu: "2000m"
                resourceLimitMemory: "4Gi"
              - name: "maven"
                image: "maven:3.8-openjdk-17"
                command: "sleep"
                args: "infinity"
                resourceLimitCpu: "4000m"
                resourceLimitMemory: "8Gi"

  tool:
    maven:
      installations:
        - name: "Maven-3.8.6"
          home: "/opt/maven"
    jdk:
      installations:
        - name: "JDK-17"
          home: "/opt/jdk-17"
    git:
      installations:
        - name: "Git"
          home: "/usr/bin/git"

  # Note: JCasC configures plugins but does not install them. Preinstall
  # required plugins (configuration-as-code, credentials-binding,
  # kubernetes, monitoring, workflow-aggregator, ...) with
  # jenkins-plugin-cli or a plugins.txt baked into the controller image.

unclassified:
  globalLibraries:
    libraries:
      - name: "shared-library"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/myorg/jenkins-shared-library.git"
        defaultVersion: "main"
        implicit: true
        cachingConfiguration:
          refreshTimeMinutes: 60

  mailer:
    smtpHost: "smtp.example.com"
    smtpPort: "587"
    useSsl: false
    useTls: true
    authentication:
      username: "jenkins@example.com"
      password: "${SMTP_PASSWORD}"
    charset: "UTF-8"

jobs:
  - script: |
      pipelineJob('seed-job') {
          description('Job that seeds other jobs from Git')
          definition {
              cpsScm {
                  scm {
                      git {
                          remote { 
                              url('https://github.com/myorg/jenkins-jobs.git') 
                              credentials('github-token')
                          }
                          branches('main')
                          scriptPath('Jenkinsfile.seed')
                      }
                  }
              }
          }
          triggers {
              scm('H/15 * * * *')
          }
      }

  - script: |
      folder('production') {
          description('Production deployment jobs')
      }

      pipelineJob('production/deploy-api') {
          definition {
              cps {
                  script(readFileFromWorkspace('pipelines/deploy-api.groovy'))
                  sandbox()
              }
          }
          parameters {
              stringParam('VERSION', '', 'Version to deploy')
              choiceParam('ENVIRONMENT', ['staging', 'prod'], 'Target environment')
          }
          // Job DSL has no credentials binding for pipeline jobs; bind
          // secrets inside the Jenkinsfile with withCredentials() instead.
      }

security:
  queueItemAuthenticator:
    authenticators:
    - global:
        strategy: "anonymous"

  globalJobDslSecurityConfiguration:
    useScriptSecurity: false

  scriptApproval:
    approvedSignatures:
      - "method java.util.List size"
      - "staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods contains java.lang.Object"
EOF

# Load secrets from environment:
cat > .env << 'EOF'
export LDAP_ADMIN_PASSWORD=secret123
export GITHUB_TOKEN=ghp_xxxxxxxxxxxx
export DOCKER_PASSWORD=yyyyyyyyyyyy
export SMTP_PASSWORD=zzzzzzzzzzzz
EOF

source .env
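The `.env` file above holds plaintext secrets, so it must never reach version control; a small guard, assuming the configuration lives in a git repository:

```shell
# Append .env to .gitignore exactly once (idempotent).
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
cat .gitignore
```

For production setups, prefer a secret manager (Vault, Docker secrets) over a local file.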

# Start Jenkins with JCasC (mount jenkins.yaml and pass every secret it references):
docker run -d \
  --name jenkins-casc \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v "$(pwd)/jenkins.yaml":/var/jenkins_home/jenkins.yaml \
  -e CASC_JENKINS_CONFIG=/var/jenkins_home/jenkins.yaml \
  -e LDAP_ADMIN_PASSWORD="${LDAP_ADMIN_PASSWORD}" \
  -e GITHUB_TOKEN="${GITHUB_TOKEN}" \
  -e DOCKER_PASSWORD="${DOCKER_PASSWORD}" \
  -e SMTP_PASSWORD="${SMTP_PASSWORD}" \
  jenkins/jenkins:lts

# Check the configuration against the running instance
# (the JCasC plugin exposes an HTTP check endpoint; use a real user and API token):
curl -sf -X POST "http://localhost:8080/configuration-as-code/check" \
  --user admin:token \
  --data-binary @jenkins.yaml

# Reload configuration without restart:
curl -X POST "http://jenkins:8080/configuration-as-code/reload" \
  --user admin:password

# Export current configuration:
curl -u admin:password "http://jenkins:8080/configuration-as-code/export" > jenkins-current.yaml

# Multi-file configuration:
# Organize by sections:
# jenkins/
#   ├── jenkins-core.yaml
#   ├── credentials.yaml
#   ├── cloud-agents.yaml
#   ├── tools.yaml
#   └── jobs.yaml

# Start with multiple files (every *.yaml in the directory is merged):
docker run -d \
  -e CASC_JENKINS_CONFIG="/var/jenkins_home/casc_configs" \
  -v jenkins_home:/var/jenkins_home \
  -v "$(pwd)/casc_configs":/var/jenkins_home/casc_configs \
  jenkins/jenkins:lts

# JCasC has no include directive. To split configuration, point
# CASC_JENKINS_CONFIG at a directory (as above) or at a
# comma-separated list of files:
#   CASC_JENKINS_CONFIG="/casc/jenkins-core.yaml,/casc/credentials.yaml"

# Dynamic credentials from Vault (requires the HashiCorp Vault plugin;
# verify the exact symbol names against your instance's
# /configuration-as-code/export). Credentials are a root-level JCasC
# section, not "unclassified"; the Vault server URL and authentication
# are configured globally by the plugin:
credentials:
  system:
    domainCredentials:
      - credentials:
          - vaultStringCredential:
              id: "db-password"
              path: "secret/database/password"
              key: "password"

# Job DSL in JCasC:
jobs:
  - script: |
      def repos = ["api", "web", "worker"]
      repos.each { repo ->
          pipelineJob("build/${repo}") {
              definition {
                  cpsScm {
                      scm {
                          git {
                              remote {
                                  url("https://github.com/myorg/${repo}.git")
                              }
                          }
                      }
                  }
              }
          }
      }

# Validate YAML before applying (there is no offline validator; POST the
# candidate file to a running controller's check endpoint):
curl -sf -X POST "http://jenkins:8080/configuration-as-code/check" \
  --user admin:token \
  --data-binary @jenkins.yaml

# GitOps workflow:
# 1. Store jenkins.yaml in Git
# 2. Jenkins polls Git or webhook triggers
# 3. Job runs: casc-reload
# 4. Configuration applied

# GitOps pipeline:
pipeline {
    agent any
    triggers {
        pollSCM('H/15 * * * *')
    }
    environment {
        // usernamePassword credential (id is illustrative); resolves to "user:token"
        JENKINS_AUTH = credentials('jenkins-admin-token')
    }
    stages {
        stage('Validate') {
            steps {
                // POST the candidate config to the JCasC check endpoint
                sh 'curl -sf -X POST "http://localhost:8080/configuration-as-code/check" --user "$JENKINS_AUTH" --data-binary @jenkins.yaml'
            }
        }
        stage('Apply') {
            steps {
                // Copy into the path referenced by CASC_JENKINS_CONFIG, then reload
                sh 'cp jenkins.yaml /var/jenkins_home/jenkins.yaml'
                sh 'curl -sf -X POST "http://localhost:8080/configuration-as-code/reload" --user "$JENKINS_AUTH"'
            }
        }
    }
}


Scenario 20: Advanced Troubleshooting & Diagnostics

Debugging complex Jenkins issues in production

sequenceDiagram
    participant Admin as Administrator
    participant Support as Support Engineer
    participant Jenkins as Jenkins Controller
    participant Agent as Build Agent
    participant Job as Failed Job
    participant Log as Log Analyzer
    participant Dump as Thread Dump
    participant Metrics as Monitoring

    Admin->>Jenkins: Job is hanging
    Jenkins->>Dump: Generate thread dump
    Dump->>Admin: Identify blocking threads

    Admin->>Log: Analyze logs with Log Parser
    Log->>Admin: Show error patterns

    Admin->>Metrics: Check system metrics
    Metrics->>Admin: High CPU/memory

    Admin->>Agent: Check agent health
    Agent->>Admin: Disk full

    Support->>Job: Replay with debug
    Job->>Support: Detailed diagnostics

    Support->>Jenkins: Apply fix
    Jenkins-->>Admin: System recovered

    Note over Jenkins: Root cause identified

Code:

# Generate thread dump:
# Option 1: From the built-in HTTP endpoint
curl -u admin:token http://jenkins:8080/threadDump > threaddump.txt

# Option 2: Using jstack (on the Jenkins server)
sudo -u jenkins jstack $(pgrep -f jenkins.war) > threaddump.txt

# Option 3: From the Jenkins UI
# Manage Jenkins → Nodes → Built-In Node → Thread Dump

# Analyze thread dump:
# Look for BLOCKED threads
# Identify deadlock patterns:
'''
"Executor #1 for master" #123 prio=5 os_prio=0
   java.lang.Thread.State: BLOCKED (on object monitor)
    at hudson.model.Queue.maintain(Queue.java:436)
    - waiting to lock <0x00000000f0001234> (a hudson.model.Queue)
'''
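Before reading a dump line by line, it helps to count thread states to see whether contention dominates; a sketch that works on the `threaddump.txt` generated above (an inline sample is used here so the commands are self-contained):

```shell
# Summarize thread states in a dump; point DUMP at your real threaddump.txt.
DUMP=$(mktemp)
cat > "$DUMP" <<'EOF'
"Executor #1 for built-in" java.lang.Thread.State: BLOCKED (on object monitor)
"Executor #2 for built-in" java.lang.Thread.State: RUNNABLE
"Handling GET /queue"      java.lang.Thread.State: BLOCKED (on object monitor)
EOF
grep -o 'java.lang.Thread.State: [A-Z_]*' "$DUMP" | sort | uniq -c | sort -rn
```

A high BLOCKED count on the same monitor usually points at the lock named in the `waiting to lock` lines.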

# Interactive script console debugging:
# Manage Jenkins → Script Console

# Check queue state:
'''
def q = Jenkins.instance.queue
println "Queue size: ${q.items.length}"
q.items.each { item ->
    println "Blocked: ${item.task.name} - ${item.why}"
}
'''

# Check agent connectivity:
'''
Jenkins.instance.computers.each { computer ->
    println "${computer.name}: ${computer.isOnline() ? 'ONLINE' : 'OFFLINE'}"
    println "  Launch supported: ${computer.isLaunchSupported()}"
    println "  Offline cause: ${computer.offlineCause}"
}
'''

# Find large builds consuming disk:
'''
def builds = []
Jenkins.instance.allItems.each { job ->
    job.builds.each { build ->
        def size = build.artifactsDir?.directorySize() ?: 0
        if (size > 1024*1024*100) { // > 100MB
            builds << [name: build.fullDisplayName, size: size]
        }
    }
}
builds.sort { -it.size }.each { 
    println "${it.name}: ${it.size / 1024 / 1024} MB"
}
'''
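The same question can be answered from the OS side without the script console; a `du` sketch (the throwaway fixture below only exists so the command runs anywhere — point JENKINS_HOME at the real home, e.g. /var/lib/jenkins):

```shell
# Largest build directories under JENKINS_HOME, biggest first (sizes in KB).
JENKINS_HOME=$(mktemp -d)                       # fixture; use the real path in production
mkdir -p "$JENKINS_HOME/jobs/demo/builds/1"
dd if=/dev/zero of="$JENKINS_HOME/jobs/demo/builds/1/log" bs=1024 count=64 2>/dev/null
du -sk "$JENKINS_HOME"/jobs/*/builds | sort -rn | head -10
```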

# Analyze build logs:
# Install "Logfilesizechecker" plugin
# Set max log size: Manage Jenkins → Configure System → Maximum log size (MB): 100

# Tail logs from CLI:
tail -f /var/log/jenkins/jenkins.log

# Configure loggers dynamically:
# Manage Jenkins → System Log → Add new log recorder
# Name: QueueDebug
# Logger: hudson.model.Queue
# Log level: FINEST

# Debug pipeline syntax:
pipeline {
    agent any
    stages {
        stage('Debug') {
            steps {
                script {
                    // Print environment variables (sanitized)
                    sh 'env | sort | grep -v PASSWORD | grep -v SECRET'

                    // Print workspace contents
                    sh 'find . -type f -name "*.log" | head -20'

                    // Print tool locations on this agent
                    sh 'which git mvn java || true'

                    // List credential IDs without values (needs script approval)
                    def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
                        com.cloudbees.plugins.credentials.common.StandardCredentials, jenkins.model.Jenkins.instance)
                    println "Credentials: ${creds.collect { it.id }}"
                }
            }
        }
    }
}

# Memory leak detection:
# Install "Memory Map" plugin
# Access: http://jenkins:8080/systemInfo

# Check for leaked classes:
jmap -histo:live $(pgrep -f jenkins.war) | head -50

# Heap dump for deep analysis:
jmap -dump:format=b,file=/tmp/jenkins.hprof $(pgrep -f jenkins.war)

# Analyze with Eclipse MAT or YourKit

# Agent disconnection troubleshooting:
# Enable JNLP debug logging:
# In agent launch command:
java -Djava.util.logging.config.file=logging.properties -jar agent.jar

# logging.properties:
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
hudson.remoting.level=ALL

# Check agent clock sync:
# Master and agents must have synchronized clocks
# Otherwise JNLP tokens expire
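To quantify the drift, compare epoch seconds from both machines; a sketch with hard-coded sample timestamps (collect real ones with `date -u +%s` on the controller and `ssh <agent> date -u +%s` on the agent — `agent1` below is a placeholder hostname):

```shell
# Absolute clock drift between controller and agent, in seconds.
controller_ts=1700000000   # date -u +%s on the controller (sample value)
agent_ts=1700000042        # ssh agent1 date -u +%s        (sample value)
drift=$(( controller_ts - agent_ts ))
drift=${drift#-}           # absolute value
echo "clock drift: ${drift}s"
[ "$drift" -lt 60 ] && echo "OK" || echo "WARNING: resync NTP before debugging further"
```

With the sample values this prints `clock drift: 42s` and `OK`; anything approaching the token lifetime warrants an NTP resync first.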

# Plugin conflict resolution:
# Jenkins CLI to list all plugins:
java -jar jenkins-cli.jar -s http://jenkins:8080 -auth admin:token list-plugins

# Disable problematic plugin:
java -jar jenkins-cli.jar -s http://jenkins:8080 -auth admin:token disable-plugin <plugin-name>

# Start with all plugins disabled (isolate a misbehaving plugin):
# create a .disabled marker next to each plugin, then restart
for f in /var/lib/jenkins/plugins/*.jpi; do touch "${f%.jpi}.disabled"; done
sudo systemctl restart jenkins

# Re-enable by deleting the .disabled markers and restarting again

# Corrupted job config fix:
# Remove job from disk:
mv /var/lib/jenkins/jobs/myjob/config.xml /tmp/myjob-config.xml.bak

# Recreate via DSL or UI

# Or fix XML manually:
'''
<?xml version='1.1' encoding='UTF-8'?>
<project>
  <actions/>
  <description/>
  <!-- Minimal config -->
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers/>
  <buildWrappers/>
</project>
'''

# Slow startup diagnosis:
# Log initialization milestone timing with a system property:
# java -Djenkins.model.Jenkins.logStartupPerformance=true -jar jenkins.war

# Check logs:
grep -i "pluginmanager" /var/log/jenkins/jenkins.log

# Orphaned process cleanup:
# Find processes holding resources:
lsof | grep deleted

# Kill hanging processes:
pkill -f "npm install"

# Workspace cleanup failures:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder for the real build
            }
        }
    }
    post {
        always {
            script {
                try {
                    cleanWs()
                } catch (Exception e) {
                    // Force cleanup (requires a sudoers entry for the agent user)
                    sh "sudo rm -rf '${WORKSPACE}'"
                }
            }
        }
    }
}

# Database lock issues (if using external DB):
# Check for locked tables:
SELECT * FROM information_schema.processlist WHERE COMMAND != 'Sleep';

# Kill long-running queries:
KILL <process_id>;

# Pipeline replay for debugging:
# Build → Replay → Edit parsed steps
# Add debug echo statements

# Support bundle generation:
# Install "Support Core" plugin
# Manage Jenkins → Generate Support Bundle
# Or via CLI:
java -jar jenkins-cli.jar -s http://jenkins:8080 support > support-bundle.zip

# Contains: logs, config, thread dumps, system info

# Test SMTP connectivity:
pipeline {
    agent any
    stages {
        stage('Test SMTP') {
            steps {
                sh '''
                    echo "Testing SMTP..." | mail -s "Test" -S smtp=smtp.gmail.com:587 \
                    -S smtp-use-starttls -S smtp-auth=login \
                    -S smtp-auth-user=jenkins@example.com \
                    -S smtp-auth-password=${SMTP_PASSWORD} \
                    admin@example.com
                '''
            }
        }
    }
}

# Monitor file descriptors:
watch "lsof | grep jenkins | wc -l"

# Increase limits:
# /etc/security/limits.conf
jenkins soft nofile 4096
jenkins hard nofile 8192
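Whether the new limits actually took effect can be checked from /proc; shown here against the current shell for illustration (substitute the Jenkins PID from `pgrep -f jenkins.war`):

```shell
# Soft and hard "Max open files" limits of a process, read from /proc (Linux).
pid=$$   # use: pid=$(pgrep -f jenkins.war | head -1) for the Jenkins process
grep 'Max open files' "/proc/$pid/limits"
```

Note that limits.conf changes only apply to new sessions; restart the Jenkins service after editing it.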

# Check zombie processes (state column starts with Z):
ps aux | awk '$8 ~ /^Z/'

# Restart strategy:
# Systemd service with restart:
'''
[Service]
Type=notify
ExecStart=/usr/bin/java -jar jenkins.war
Restart=on-failure
RestartSec=30
StartLimitInterval=0
StartLimitBurst=5
'''

# Rollback Jenkins version:
sudo systemctl stop jenkins
sudo apt-get install --allow-downgrades jenkins=2.361.4
sudo systemctl start jenkins

# Prevent automatic upgrades of the pinned version:
sudo apt-mark hold jenkins


Quick Reference: Essential Commands

Command Description Level
jenkins-cli.jar -s http://localhost:8080/ help List all available CLI commands Beginner
jenkins-cli.jar -s http://localhost:8080/ create-job <job-name> Create a new job Beginner
jenkins-cli.jar -s http://localhost:8080/ build <job-name> Trigger a build Beginner
jenkins-cli.jar -s http://localhost:8080/ get-job <job-name> Retrieve job configuration Beginner
jenkins-cli.jar -s http://localhost:8080/ update-job <job-name> Update job configuration Beginner
jenkins-cli.jar -s http://localhost:8080/ delete-job <job-name> Delete a job Beginner
jenkins-cli.jar -s http://localhost:8080/ list-jobs List all jobs Beginner
jenkins-cli.jar -s http://localhost:8080/ console <build-number> View build console output Beginner
jenkins-cli.jar -s http://localhost:8080/ stop-build <job-name> <build-number> Stop a running build Intermediate
jenkins-cli.jar -s http://localhost:8080/ quiet-down Put Jenkins in quiet mode Intermediate
jenkins-cli.jar -s http://localhost:8080/ cancel-quiet-down Cancel quiet mode Intermediate
jenkins-cli.jar -s http://localhost:8080/ restart Restart Jenkins Intermediate
jenkins-cli.jar -s http://localhost:8080/ safe-restart Restart safely Intermediate
jenkins-cli.jar -s http://localhost:8080/ reload-configuration Reload configuration from disk Intermediate
jenkins-cli.jar -s http://localhost:8080/ install-plugin <plugin-name> Install a plugin Intermediate
jenkins-cli.jar -s http://localhost:8080/ disable-plugin <plugin-name> Disable a plugin Intermediate
jenkins-cli.jar -s http://localhost:8080/ list-plugins List installed plugins Intermediate
jenkins-cli.jar -s http://localhost:8080/ create-credentials-by-xml <credentials-xml> Create credentials Advanced
jenkins-cli.jar -s http://localhost:8080/ update-credentials-by-xml <credentials-xml> Update credentials Advanced
jenkins-cli.jar -s http://localhost:8080/ delete-credentials <credentials-id> Delete credentials Advanced
jenkins-cli.jar -s http://localhost:8080/ list-credentials List credentials Advanced
jenkins-cli.jar -s http://localhost:8080/ create-node <node-name> Create a new node Advanced
jenkins-cli.jar -s http://localhost:8080/ delete-node <node-name> Delete a node Advanced
jenkins-cli.jar -s http://localhost:8080/ list-nodes List all nodes Advanced

Pro Tips for All Levels

  1. Always use Jenkinsfile: Define your pipeline in a Jenkinsfile for version control.
  2. Use Blue Ocean: For a modern UI experience and better pipeline visualization.
  3. Label everything: Use labels for organization and selection.
  4. Set resource limits: Prevent noisy neighbor issues and ensure stability.
  5. Use credentials: Never hardcode secrets in jobs or pipelines.
  6. Health checks are critical: Always implement health checks for agents.
  7. Monitor everything: Use Prometheus and Grafana for observability.
  8. Backup Jenkins: Critical for instance recovery.
  9. Keep Jenkins updated: Stay on supported versions.
  10. Use GitOps: Tools like ArgoCD or Flux for declarative deployments.
  11. Network policies: Secure agent-to-controller communication.
  12. Pipeline security standards: Follow restricted security policies.
  13. Use shared libraries: Reuse pipeline code across multiple projects.
  14. Automate testing: Run your test suites as pipeline stages so every commit is verified.
  15. Use remote execution: Leverage Jenkins agents for distributed builds.
  16. Backup job configurations: Regularly back up your job configurations to prevent data loss.

Happy continuous integration and delivery! 🚀