Continuous Integration & Deployment Guide – Real World Example
Sat Feb 28 2026 · 8 min · Intermediate


A comprehensive, practical guide to building a production‑ready CI/CD pipeline, featuring architecture diagrams, code snippets, and real‑world implementation tips.

#ci/cd #devops #github actions #jenkins #docker #kubernetes

Introduction

In today's fast‑moving software landscape, delivering features quickly and safely has become a competitive advantage. Continuous Integration (CI) and Continuous Deployment (CD) are the pillars of modern DevOps, turning manual, error‑prone release cycles into automated, repeatable processes.

This guide walks you through a real‑world CI/CD pipeline built on open‑source tools that scale from a small team to enterprise‑grade workloads. By the end of the article, you will be able to:

  • Visualize the end‑to‑end CI/CD architecture.
  • Write a GitHub Actions workflow that builds, tests, and pushes Docker images.
  • Deploy a container to a Kubernetes cluster using blue‑green strategy.
  • Apply best practices that keep pipelines fast, secure, and maintainable.

Understanding CI/CD Architecture

A well‑designed CI/CD architecture separates concerns, provides clear feedback loops, and isolates environments. Its core components are:

  1. Source Repository - GitHub stores code, configuration, and pipeline definitions.
  2. CI Runner - GitHub Actions (or Jenkins) executes jobs in isolated containers.
  3. Artifact Registry - Docker Hub or GitHub Container Registry hosts built images.
  4. Testing Suite - Unit, integration, and security scans run during the CI phase.
  5. Staging Environment - A disposable Kubernetes namespace where the new version is validated.
  6. Production Cluster - The live system receives traffic via a blue‑green or canary rollout.
  7. Observability Stack - Prometheus, Grafana, and Loki collect metrics and logs for rapid rollback.

Why Each Layer Matters

  • Isolation protects the main branch from broken code.
  • Immutability (Docker images) guarantees that what was tested is exactly what runs in production.
  • Versioned Artifacts enable traceability from commit SHA to deployed container.
  • Automated Gates (security scans, performance budgets) keep quality high without manual sign‑off.

The infrastructure can be represented as IaC (Infrastructure as Code) using Terraform for cloud resources and Helm for Kubernetes manifests. Keeping the pipeline definition alongside the application code ensures that the CI/CD process evolves together with the product.
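
As a minimal sketch of that split (the module source is the community EKS module; the cluster name and version are assumptions, not from this article), the cloud side might look like:

```hcl
# Hypothetical Terraform fragment for the cloud resources; the application's
# Kubernetes manifests stay in the Helm chart next to the app code.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "my-api-cluster"
  cluster_version = "1.27"
  # VPC, subnet, and node-group settings omitted for brevity
}
```

Keeping this file in the same repository (or a dedicated infra repository reviewed with the same rigor) means cluster changes go through pull requests just like application changes.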

Step‑by‑Step Real World Example

Below is a practical implementation that ties together the architecture described earlier. The example assumes a simple Node.js API that is containerized and deployed to an Amazon EKS cluster.

1. Repository Layout

my-api/
├─ .github/workflows/ci-cd.yml   # CI/CD pipeline definition
├─ Dockerfile                    # Container build instructions
├─ helm/
│  └─ my-api/                    # Helm chart for Kubernetes
├─ src/                          # Application source code
├─ tests/                        # Jest test suite
└─ README.md

2. Dockerfile (Build Stage)

dockerfile

# syntax=docker/dockerfile:1.4

FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --production

FROM node:18-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]

The multi‑stage build keeps the final image lightweight (<80 MB) and eliminates dev‑dependencies.

3. GitHub Actions Workflow (ci-cd.yml)

yaml

name: CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build-test:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    outputs:
      tag: ${{ steps.img.outputs.tag }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Node.js 18
        uses: actions/setup-node@v4
        with:
          node-version: 18
          cache: npm

      - name: Install dependencies
        run: npm ci

      - name: Run unit tests
        run: npm test
        env:
          CI: true

      - name: Lint and security scan
        run: |
          npm run lint
          npm audit --audit-level=high

      - name: Build Docker image
        run: |
          docker build -t ghcr.io/${{ github.repository_owner }}/my-api:${{ github.sha }} .

      - name: Log in to GitHub Container Registry
        if: github.event_name == 'push'   # PRs only build and test; they never push
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Push image
        if: github.event_name == 'push'
        run: |
          docker push ghcr.io/${{ github.repository_owner }}/my-api:${{ github.sha }}

      - name: Export image tag
        id: img
        run: echo "tag=${{ github.sha }}" >> "$GITHUB_OUTPUT"

  deploy-staging:
    needs: build-test
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.27.0'

      - name: Point kubectl at the EKS cluster
        run: aws eks update-kubeconfig --region us-east-1 --name my-api-cluster  # cluster name is illustrative

      - name: Update Helm values with new image
        run: |
          helm upgrade --install my-api ./helm/my-api \
            --namespace staging \
            --set image.repository=ghcr.io/${{ github.repository_owner }}/my-api \
            --set image.tag=${{ needs.build-test.outputs.tag }} \
            --wait

Key points:

  • The pipeline runs on every push to main and on PRs, ensuring early defect detection.
  • Linting and npm audit act as quality gates.
  • Docker image tags use the commit SHA, providing traceability.
  • Deployment to a staging namespace leverages Helm for reproducible releases.

4. Helm Chart Snippet (values.yaml)

yaml

replicaCount: 2

image:
  repository: ghcr.io/your-org/my-api
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: ClusterIP
  port: 80

resources:
  limits:
    cpu: "500m"
    memory: "256Mi"
  requests:
    cpu: "250m"
    memory: "128Mi"

5. Blue‑Green Deployment Strategy

To achieve zero‑downtime releases, the pipeline can be extended with a blue‑green approach:

  1. Deploy the new version to a temporary namespace green.
  2. Run integration tests against the green endpoint.
  3. Switch the Kubernetes Service selector from blue to green.
  4. Keep the old version running for a short grace period before cleanup.
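
Step 3 is the actual cutover. A minimal sketch of the Service involved (the label names are assumptions) looks like:

```yaml
# Hypothetical Service for the blue-green switch. Traffic follows the
# selector, so flipping "version" from blue to green cuts over instantly.
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: production
spec:
  selector:
    app: my-api
    version: blue   # step 3 changes this to "green"
  ports:
    - port: 80
      targetPort: 3000
```

Because only the selector changes, rollback is the same operation in reverse: point the selector back at blue while the old pods are still running.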

The following job gates production behind a manual approval. In GitHub Actions the idiomatic mechanism is an environment protection rule: configure required reviewers on the production environment, and the job pauses at `environment: production` until someone approves the run.

yaml

deploy-production:
  needs: [build-test, deploy-staging]
  runs-on: ubuntu-latest
  # Required reviewers on the "production" environment act as the manual
  # approval gate; the job waits here until a reviewer approves.
  environment: production
  steps:
    - name: Checkout repo
      uses: actions/checkout@v4

    - name: Deploy green to production
      run: |
        helm upgrade --install my-api ./helm/my-api \
          --namespace production \
          --set image.tag=${{ needs.build-test.outputs.tag }} \
          --wait

By combining automated testing with a human checkpoint, risk is minimized while still delivering speed.

Best Practices & Common Pitfalls

Keep Pipelines Small and Focused

  • Single Responsibility: Each job should do one thing: build, test, or deploy. This simplifies debugging and improves caching.
  • Parallelism: Run unit tests, linting, and security scans concurrently to reduce total runtime.

Secure Secrets

  • Store credentials in the CI platform's secret store (GitHub Secrets, Jenkins Credentials).
  • Use short‑lived tokens (OIDC tokens) instead of static passwords where possible.

Version Your Artifacts

  • Tag Docker images with both commit SHA and semantic version (e.g., v1.2.3-${SHA}).
  • Save generated Helm values as build artifacts for auditability.
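
As a tiny illustration of that tagging convention (the version and SHA values below are made up):

```shell
#!/usr/bin/env bash
# Compose a traceable image tag from a semantic version and the commit SHA.
VERSION="v1.2.3"
SHA="9fceb02d0ae598e95dc970b74767f19372d61af8"   # e.g. from: git rev-parse HEAD
TAG="${VERSION}-${SHA:0:7}"                      # short SHA keeps tags readable
echo "$TAG"                                      # v1.2.3-9fceb02
```

The semantic version tells humans what changed; the SHA suffix lets you trace any running container back to the exact commit.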

Optimize Caching

  • Leverage Docker layer cache by copying package*.json before source files.
  • Enable actions/cache (or setup-node's built‑in npm cache) so dependencies are not re‑downloaded on every run.
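
Note that actions/setup-node with `cache: npm` (used in the workflow above) already caches the npm download cache. An explicit cache step, for cases needing finer control, might look like this (the paths and key format are illustrative):

```yaml
# Hypothetical explicit cache step; setup-node's "cache: npm" covers the
# common case, so reach for this only when you need custom paths or keys.
- name: Cache npm downloads
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-
```

Keying on the lockfile hash means the cache is invalidated exactly when dependencies change, and the restore-keys prefix still gives a warm partial cache after a lockfile update.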

Monitor and Roll Back Quickly

  • Configure health checks and readiness probes in Kubernetes.
  • Use Prometheus alerts that fire on error‑rate spikes within the first minutes after a release.
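
A sketch of those probes in the chart's deployment template (the /healthz path is an assumption; use whatever health endpoint your app exposes):

```yaml
# Hypothetical probe settings. Readiness gates traffic to the pod;
# liveness restarts it if the process wedges.
livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
```

With readiness probes in place, `helm upgrade --wait` (as used in the pipeline) only reports success once the new pods are actually serving.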

Common Pitfalls to Avoid

  • Hard‑coded image tags. Symptom: stale containers in production after a code change. Remedy: use dynamic tags derived from the Git SHA or build number.
  • Skipping security scans. Symptom: vulnerable dependencies reach production. Remedy: enforce npm audit as a required status check.
  • Long‑running jobs. Symptom: slow feedback loop; developers lose confidence. Remedy: split tests, use matrix builds, and cache dependencies.
  • Deploying to prod without approval. Symptom: accidental outage due to failed tests. Remedy: add a manual approval step or implement automated canary analysis.

Adhering to these guidelines results in pipelines that are fast, reliable, and secure.

FAQs

Q1: Do I need a separate CI server if I use GitHub Actions?

A: No. GitHub Actions provides hosted runners that eliminate the need for on‑prem CI infrastructure. For organizations with specialized compliance requirements, self‑hosted runners can be added as needed.

Q2: How can I test a Kubernetes deployment without affecting the live cluster?

A: Use a preview namespace created on‑the‑fly (e.g., pr‑${{ github.event.number }}). Deploy the Helm chart there, run integration tests, and delete the namespace automatically after the workflow finishes. This isolates every PR.
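
A minimal sketch of those preview jobs (the namespace naming and triggers are assumptions, and the steps assume cluster credentials have been configured as in the staging job):

```yaml
# Hypothetical preview-environment jobs keyed to the PR number.
deploy-preview:
  if: github.event_name == 'pull_request'
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Deploy to an ephemeral namespace
      run: |
        NS="pr-${{ github.event.number }}"
        kubectl create namespace "$NS" --dry-run=client -o yaml | kubectl apply -f -
        helm upgrade --install my-api ./helm/my-api \
          --namespace "$NS" \
          --set image.tag=${{ github.sha }}

cleanup-preview:
  if: github.event.action == 'closed'
  runs-on: ubuntu-latest
  steps:
    - name: Tear down the preview namespace
      run: kubectl delete namespace "pr-${{ github.event.number }}" --ignore-not-found
```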

Q3: What is the difference between “Continuous Deployment” and “Continuous Delivery”?

A: Continuous Delivery ensures that every change is ready to be released, but a human may trigger the final deployment. Continuous Deployment extends this by automatically pushing every validated change to production without manual intervention.

Q4: Can I reuse the same pipeline for multiple microservices?

A: Yes. Parameterize the workflow using inputs or environment variables (e.g., service-name). Store each service’s Helm chart in its own directory and reference the same CI template.
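
A sketch of that parameterization using a reusable workflow (the file path and input names are illustrative):

```yaml
# Hypothetical shared template, e.g. .github/workflows/service-ci.yml.
# Each microservice's workflow calls it with its own service-name.
name: Service CI template
on:
  workflow_call:
    inputs:
      service-name:
        required: true
        type: string

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build service image
        run: |
          docker build -t ghcr.io/${{ github.repository_owner }}/${{ inputs.service-name }}:${{ github.sha }} .
```

A service's own workflow then reduces to a `uses:` line pointing at this template plus a `with: service-name: my-api` block.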

Q5: How do I roll back a failed deployment?

A: Since images are immutable and tagged with commit SHA, you can simply re‑deploy the previous tag. In Kubernetes, kubectl rollout undo deployment/<name> restores the prior ReplicaSet, and Helm can be instructed with --set image.tag=<previous‑sha>.
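
A sketch of both rollback paths (the release and deployment names assume the example above; `<previous-sha>` stands in for whatever tag was last known good):

```shell
# Hypothetical rollback commands; both require working cluster access.
# Option 1: let Kubernetes restore the prior ReplicaSet.
kubectl rollout undo deployment/my-api --namespace production

# Option 2: pin the image explicitly through Helm.
helm upgrade my-api ./helm/my-api \
  --namespace production \
  --set image.tag=<previous-sha>
```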

Conclusion

Building a robust CI/CD pipeline is no longer a luxury; it is a baseline expectation for modern software teams. By aligning architecture, automation, and observability, you can achieve:

  1. Rapid feedback that catches defects early.
  2. Consistent, reproducible releases via immutable Docker images.
  3. Zero‑downtime deployments through blue‑green or canary strategies.
  4. Secure operations by treating secrets and dependencies as first‑class citizens.

The real‑world example presented here (GitHub Actions + Docker + Helm + Kubernetes) demonstrates how a few well‑structured YAML files and Docker best practices translate into a production‑grade pipeline. Apply the best‑practice checklist, avoid common pitfalls, and continuously iterate on your workflow to keep pace with evolving business needs.

Remember, the pipeline itself is code; store it alongside your application, version it, and review it with the same rigor as any other software artifact. When you do, CI/CD becomes a competitive advantage rather than a maintenance burden.