Multi-Stage CI/CD Pipeline with GitHub Actions
Implementing deployment pipelines using GitHub Actions, reusable workflows, and stage-based orchestration
Related Concepts: Continuous Integration | Continuous Delivery
Introduction
This guide shows you how to implement a multi-stage Continuous Delivery pipeline using GitHub Actions. We'll build on the deployment pipeline pattern from the CD concept article, implementing the commit, build, acceptance, and deploy stages using GitHub's workflow orchestration features.
Why Multi-Stage Pipelines?
From the Continuous Delivery article, recall the deployment pipeline's core principle:
"Each commit to mainline triggers the pipeline. The change flows through a series of stages, and with each passing stage, you gain higher confidence in that revision of the code."
The pipeline balances speed (fast feedback) with thoroughness (comprehensive verification) by organizing checks into stages:
- Early stages catch most problems rapidly
- Later stages provide slower but more thorough verification
- Parallel execution optimizes speed within each stage
The Four Pipeline Stages
Our pipelines follow the canonical deployment pipeline structure:
1. Commit Stage
Purpose: Fast feedback that code is fundamentally sound.
Speed: Under 10 minutes (the absolute maximum from Continuous Integration).
Activities:
- Format checking
- Linting
- Type checking
- Unit tests
- Architecture boundary validation (e.g., dependency-cruiser)
- Integration tests (run in parallel with fast checks)
Critical Rule: All commit stage jobs run in parallel with no dependencies between them. This provides the fastest possible feedback.
2. Build Stage
Purpose: Create immutable, deployable artifacts.
Activities:
- Docker image builds (APIs/backends)
- Frontend application builds (creating tarballs or static assets)
- Tag artifacts with version information (git SHA, build number)
- Push artifacts to registries (ECR, S3, etc.)
Critical Rule: Build stage jobs depend on all commit stage jobs passing. We never build artifacts from code that hasn't passed basic quality checks.
3. Acceptance Stage
Purpose: Verify software meets business requirements end-to-end.
Activities:
- Deploy artifacts to test environment
- Run automated acceptance tests (E2E tests with Playwright, Cypress, etc.)
- Test complete user workflows
- Verify integration with external systems
Critical Rule: Tests run against the same artifacts created in the build stage. We're testing what we'll deploy to production.
4. Deploy Stage
Purpose: Promote verified artifacts to production.
Activities:
- Deploy the exact same artifacts from build stage
- Run database migrations
- Update infrastructure configuration
- Perform smoke tests
Critical Rule: This is Build Once, Deploy Everywhere. We deploy the same immutable artifact that passed all previous stages.
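A smoke test at this stage can be as simple as polling a health endpoint until the new version responds. A minimal sketch, assuming a hypothetical `/healthz` route (the URL and retry counts are illustrative, not from the pipeline above):

```yaml
      # Hypothetical smoke test step: poll the health endpoint after rollout,
      # failing the deploy job if the service never becomes healthy.
      - name: Smoke test
        run: |
          for i in $(seq 1 10); do
            curl -fsS https://example.com/healthz && exit 0
            sleep 5
          done
          echo "Smoke test failed: service never became healthy"
          exit 1
```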
GitHub Actions Implementation
Reusable Workflows Pattern
GitHub Actions supports reusable workflows via the workflow_call trigger. This pattern enables:
- Composability - Small, focused workflows that do one thing well
- Maintainability - Update the implementation in one place
- Testability - Each workflow can be tested independently
- Readability - The orchestrator workflow reads like a deployment pipeline diagram
A reusable workflow looks like this:
```yaml
# .github/workflows/api_fast_checks.yml
name: API Fast Checks

on:
  workflow_call:
    inputs:
      directory:
        required: true
        type: string
        description: "Directory containing the API service"

jobs:
  fast_checks:
    name: Fast Checks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: ${{ inputs.directory }}/package-lock.json

      - name: Install dependencies
        working-directory: ${{ inputs.directory }}
        run: npm ci

      - name: Check formatting
        working-directory: ${{ inputs.directory }}
        run: npm run format:check

      - name: Lint
        working-directory: ${{ inputs.directory }}
        run: npm run lint:ci

      - name: Type check
        working-directory: ${{ inputs.directory }}
        run: npm run type-check

      - name: Check architecture boundaries
        working-directory: ${{ inputs.directory }}
        run: npm run depcruise

      - name: Run unit tests
        working-directory: ${{ inputs.directory }}
        run: npm run test:ci
```

Key Features:
- `workflow_call` trigger makes this workflow reusable
- `inputs` parameter allows the caller to customize behavior
- Self-contained - does one thing (fast checks) well
- Can be called from multiple orchestrator workflows
The Orchestrator Pattern
The orchestrator workflow (build.yml) composes reusable workflows into a multi-stage pipeline using the needs keyword:
```yaml
# .github/workflows/build.yml
name: Build and Deploy

on:
  push:
    branches:
      - main

concurrency:
  group: build-${{ github.ref }}
  cancel-in-progress: false

jobs:
  # ============================================================
  # STAGE 1: COMMIT - Fast Checks & Integration Tests
  # All jobs in this stage run in PARALLEL (no dependencies)
  # ============================================================
  api_fast_checks:
    name: API Fast Checks
    uses: ./.github/workflows/api_fast_checks.yml
    with:
      directory: services/backend

  api_integration_tests:
    name: API Integration Tests
    uses: ./.github/workflows/api_integration_tests.yml
    with:
      directory: services/backend

  ui_fast_checks:
    name: UI Fast Checks
    uses: ./.github/workflows/ui_fast_checks.yml
    with:
      directory: services/frontend

  # ============================================================
  # STAGE 2: BUILD - Create Artifacts
  # All jobs need ALL commit stage jobs to pass
  # ============================================================
  build_backend:
    name: Build Backend
    needs: [api_fast_checks, api_integration_tests, ui_fast_checks]
    uses: ./.github/workflows/build_backend.yml
    with:
      directory: services/backend
    secrets: inherit

  build_frontend:
    name: Build Frontend
    needs: [api_fast_checks, api_integration_tests, ui_fast_checks]
    uses: ./.github/workflows/build_frontend.yml
    with:
      directory: services/frontend
      artifact_prefix: my-app
    secrets: inherit

  # ============================================================
  # STAGE 3: ACCEPTANCE - End-to-End Testing
  # Needs ALL build jobs to complete
  # ============================================================
  acceptance_tests:
    name: Acceptance Tests
    needs: [build_backend, build_frontend]
    uses: ./.github/workflows/acceptance_tests.yml
    with:
      backend_image_tag: ${{ needs.build_backend.outputs.image_tag }}
      frontend_tarball_sha: ${{ needs.build_frontend.outputs.tarball_sha }}
    secrets: inherit

  # ============================================================
  # STAGE 4: DEPLOY - Deploy to Production
  # Needs acceptance tests to pass
  # ============================================================
  deploy_production:
    name: Deploy to Production
    needs: [acceptance_tests, build_backend, build_frontend]
    uses: ./.github/workflows/deploy_backend.yml
    with:
      environment: production
      image_tag: ${{ needs.build_backend.outputs.image_tag }}
    secrets: inherit
```

Understanding the needs Keyword
The needs keyword creates the stage structure. GitHub Actions will:
- Analyze dependencies - Build a directed acyclic graph (DAG) of job dependencies
- Run jobs in parallel - Execute all jobs without dependencies simultaneously
- Gate progression - Jobs with `needs` only run if all dependencies succeed
- Pass data - Jobs can access outputs from their dependencies
Example Execution Flow:
```text
STAGE 1 (Parallel - no dependencies):
  ├── api_fast_checks ────────┐
  ├── api_integration_tests ──┤
  └── ui_fast_checks ─────────┤
                              │ All must pass
                              ↓
STAGE 2 (Parallel - needs all Stage 1):
  ├── build_backend ──┐
  └── build_frontend ─┤
                      │ All must pass
                      ↓
STAGE 3 (needs all Stage 2):
  └── acceptance_tests ──┐
                         │ Must pass
                         ↓
STAGE 4 (needs Stage 3 + builds for artifacts):
  └── deploy_production
```

This structure ensures:
- Fast feedback - Commit stage jobs run immediately in parallel
- Quality gates - No builds without passing tests
- Artifact validation - Acceptance tests validate what we'll deploy
- Safe deployments - Only deploy artifacts that passed all tests
Passing Data Between Stages
Jobs need to communicate artifact information between stages. Use outputs:
Defining Outputs in Reusable Workflows
```yaml
# .github/workflows/build_backend.yml
name: Build Backend

on:
  workflow_call:
    inputs:
      directory:
        required: true
        type: string
    outputs:
      image_tag:
        description: "The Docker image tag (git SHA)"
        value: ${{ jobs.build.outputs.image_tag }}
      image_url:
        description: "The full Docker image URL"
        value: ${{ jobs.build.outputs.image_url }}
    secrets:
      AWS_REGION:
        required: true
      AWS_ROLE_ARN:
        required: true
      ECR_REPOSITORY:
        required: true

jobs:
  build:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    outputs:
      image_tag: ${{ steps.tags.outputs.sha_short }}
      image_url: ${{ steps.tags.outputs.full_image_url }}
    steps:
      - uses: actions/checkout@v4

      - name: Generate tags
        id: tags
        run: |
          SHA_SHORT=$(git rev-parse --short HEAD)
          BUILD_NUMBER=${{ github.run_number }}
          echo "sha_short=$SHA_SHORT" >> $GITHUB_OUTPUT
          echo "build_number=$BUILD_NUMBER" >> $GITHUB_OUTPUT
          echo "full_image_url=${{ secrets.ECR_REPOSITORY }}:$SHA_SHORT" >> $GITHUB_OUTPUT

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: ${{ inputs.directory }}
          push: true
          tags: |
            ${{ secrets.ECR_REPOSITORY }}:${{ steps.tags.outputs.sha_short }}
            ${{ secrets.ECR_REPOSITORY }}:build-${{ steps.tags.outputs.build_number }}
            ${{ secrets.ECR_REPOSITORY }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

Using Outputs in the Orchestrator
```yaml
deploy_production:
  name: Deploy to Production
  needs: [acceptance_tests, build_backend, build_frontend]
  uses: ./.github/workflows/deploy_backend.yml
  with:
    environment: production
    # Use the image_tag output from build_backend job
    image_tag: ${{ needs.build_backend.outputs.image_tag }}
  secrets: inherit
```

Why This Matters:
This is the implementation of Build Once, Deploy Everywhere. The deployment stage uses the exact image_tag that was:
- Built in the build stage
- Tested in the acceptance stage
- Now deployed to production
We never rebuild. We deploy the validated artifact.
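If you also want a manual approval gate before production, GitHub environments can add one without changing the artifact flow. A sketch of what the deploy job might declare (the environment name and URL are illustrative; protection rules such as required reviewers are configured in repository settings, not in the workflow file):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Referencing an environment applies any protection rules configured
    # for it (required reviewers, wait timers) before this job starts.
    environment:
      name: production
      url: https://example.com  # hypothetical deployment URL shown in the run summary
```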
Build Once, Deploy Everywhere
This principle from Continuous Delivery is critical for reliability:
"Create deployable packages/artifacts once in the commit stage, then use that same artifact for all subsequent stages."
Why It Matters
Without Build Once, Deploy Everywhere:
- Build API in build stage → Tag: `abc123`
- Acceptance tests trigger another build → Tag: `def456`
- Deploy to production triggers another build → Tag: `ghi789`
- Problem: You deployed `ghi789` but only tested `def456`!
With Build Once, Deploy Everywhere:
- Build API in build stage → Tag: `abc123`
- Acceptance tests use the `abc123` artifact
- Deploy to production uses the `abc123` artifact
- Confidence: What you tested is what you deployed
Implementation Patterns
For Docker Images
```yaml
# Build stage creates image with SHA tag
build_backend:
  outputs:
    image_tag: "a3f8c9d" # git SHA short

# Acceptance stage uses that exact tag
acceptance_tests:
  with:
    backend_image_tag: ${{ needs.build_backend.outputs.image_tag }}
    # Tests run against: myapp:a3f8c9d

# Deploy stage deploys that exact tag
deploy_production:
  with:
    image_tag: ${{ needs.build_backend.outputs.image_tag }}
    # Deploys: myapp:a3f8c9d
```

For Frontend Tarballs
```yaml
# Build stage creates tarball and uploads to S3
build_frontend:
  steps:
    - run: npm run build
    - run: tar -czf dist-${{ steps.sha.outputs.sha_short }}.tar.gz dist/
    - run: aws s3 cp dist-*.tar.gz s3://my-bucket/builds/
  outputs:
    tarball_sha: ${{ steps.sha.outputs.sha_short }}

# Acceptance stage downloads and serves that tarball
acceptance_tests:
  steps:
    - run: aws s3 cp s3://my-bucket/builds/dist-${{ inputs.tarball_sha }}.tar.gz .
    - run: tar -xzf dist-*.tar.gz
    - run: npx serve dist &
    - run: npm run test:e2e

# Deploy stage deploys that same tarball
deploy_frontend:
  steps:
    - run: aws s3 cp s3://my-bucket/builds/dist-${{ inputs.tarball_sha }}.tar.gz .
    - run: tar -xzf dist-*.tar.gz
    - run: aws s3 sync dist/ s3://my-production-bucket/
```

Artifact Versioning Strategy
Tag artifacts with multiple identifiers for traceability:
```yaml
tags: |
  ${{ secrets.ECR_REPOSITORY }}:${{ steps.sha.outputs.sha_short }}
  ${{ secrets.ECR_REPOSITORY }}:build-${{ github.run_number }}
  ${{ secrets.ECR_REPOSITORY }}:latest
```

Why three tags?
- SHA tag (`a3f8c9d`) - Links the artifact to the exact source code commit
- Build tag (`build-1234`) - Links the artifact to the pipeline run that created it
- Latest tag (`latest`) - Convenient pointer to the most recent artifact
The SHA tag is the "source of truth" passed between stages. The others are for convenience and debugging.
Integration Tests in Commit Stage
Notice in our orchestrator that integration tests run in parallel with fast checks, not after them:
```yaml
# STAGE 1: Both run in parallel
api_fast_checks:
  # Runs: format, lint, type-check, unit tests

api_integration_tests:
  # Runs: tests with real Postgres database
```

This differs from the traditional "unit tests → integration tests" sequence. Here's why:
Why Parallel?
From Continuous Integration:
"The build must be fast—Martin Fowler recommends the commit stage complete in under 10 minutes."
If your integration tests take 5 minutes and your fast checks take 5 minutes, running them sequentially takes 10 minutes (the maximum). Running them in parallel takes 5 minutes.
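The 10-minute budget can also be enforced mechanically, so a hung job fails fast instead of stalling the pipeline. A sketch using GitHub Actions' `timeout-minutes` setting (not part of the workflows above):

```yaml
jobs:
  fast_checks:
    runs-on: ubuntu-latest
    # Fail the job if it exceeds the commit-stage budget instead of
    # hanging (GitHub's default job timeout is 360 minutes).
    timeout-minutes: 10
```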
Integration Test Implementation
Integration tests typically need service containers (databases, caches, etc.):
```yaml
# .github/workflows/api_integration_tests.yml
name: API Integration Tests

on:
  workflow_call:
    inputs:
      directory:
        required: true
        type: string

jobs:
  integration_tests:
    name: Integration Tests
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: ${{ inputs.directory }}/package-lock.json

      - name: Install dependencies
        working-directory: ${{ inputs.directory }}
        run: npm ci

      - name: Run migrations
        working-directory: ${{ inputs.directory }}
        run: npm run migrate:latest
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb

      - name: Run integration tests
        working-directory: ${{ inputs.directory }}
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
```

Key Points:
- `services` section spins up a Postgres container
- Health checks ensure the database is ready before tests run
- Migrations run before tests (testing with the real schema)
- Tests run against a real database, not mocks
Pull Request Workflows
The Continuous Delivery article notes that modern teams adapt the pipeline for pull requests:
"Before Merge (On Pull Request): Run commit stage checks, run subset of acceptance tests (smoke tests), provide fast feedback to developer, gate the merge—only allow green builds to merge."
PR Workflow Structure
```yaml
# .github/workflows/pull_request.yml
name: Pull Request Checks

on:
  pull_request:
    branches:
      - main

jobs:
  # Run commit stage checks only - no builds or deployments
  api_fast_checks:
    name: API Fast Checks
    uses: ./.github/workflows/api_fast_checks.yml
    with:
      directory: services/backend

  api_integration_tests:
    name: API Integration Tests
    uses: ./.github/workflows/api_integration_tests.yml
    with:
      directory: services/backend

  ui_fast_checks:
    name: UI Fast Checks
    uses: ./.github/workflows/ui_fast_checks.yml
    with:
      directory: services/frontend
```

Key Differences from Build Workflow:
- Only runs commit stage checks
- No build, acceptance, or deploy stages
- Gates merge - PR cannot merge unless all checks pass
This provides fast feedback (under 10 minutes) while ensuring code quality before integration into mainline.
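Branch protection requires listing each required check by name, which gets tedious as checks multiply. One common pattern, shown here as a hypothetical addition to the PR workflow rather than part of the original pipeline, is a single aggregate "gate" job that branch protection can require instead:

```yaml
  # Hypothetical aggregate gate: succeeds only if every commit-stage check passed,
  # so branch protection needs to require just this one job.
  all_checks_passed:
    name: All Checks Passed
    runs-on: ubuntu-latest
    needs: [api_fast_checks, api_integration_tests, ui_fast_checks]
    if: always()  # run even when a dependency fails, so the gate can report it
    steps:
      - name: Verify no dependency failed
        run: |
          if [ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" = "true" ]; then
            echo "One or more required checks did not pass"
            exit 1
          fi
```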
Secrets and Security
Reusable workflows can access secrets in two ways:
1. Inherit All Secrets
```yaml
my_job:
  uses: ./.github/workflows/reusable_workflow.yml
  secrets: inherit # Pass all secrets from caller to reusable workflow
```

2. Pass Specific Secrets
```yaml
# In reusable workflow
on:
  workflow_call:
    secrets:
      AWS_ROLE_ARN:
        required: true
      ECR_REPOSITORY:
        required: true

# In orchestrator
my_job:
  uses: ./.github/workflows/reusable_workflow.yml
  secrets:
    AWS_ROLE_ARN: ${{ secrets.AWS_ROLE_ARN }}
    ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
```

Best Practice: Use `secrets: inherit` for internal workflows in the same repository. Use explicit secret passing when:
- You want to document exactly which secrets are required
- You're calling workflows from different repositories
- You need different secrets for different environments
AWS Credential Configuration
Use OIDC (OpenID Connect) for AWS authentication instead of long-lived credentials:
```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
    aws-region: ${{ secrets.AWS_REGION }}
```

Benefits:
- No long-lived credentials stored in GitHub
- AWS IAM role defines permissions
- Automatic credential rotation
- Audit trail in AWS CloudTrail
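One prerequisite worth noting: OIDC only works if the workflow requests an ID token. Without a `permissions` block like the following on the job (or workflow), the `configure-aws-credentials` step cannot exchange a token for AWS credentials:

```yaml
# In the job (or at workflow level) that assumes the AWS role:
permissions:
  id-token: write   # required to request the OIDC token
  contents: read    # still needed for actions/checkout
```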
Performance Optimization
Caching Dependencies
Always cache dependencies to speed up builds:
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'
    cache-dependency-path: ${{ inputs.directory }}/package-lock.json
```

This caches the npm download cache between runs. `npm ci` still executes, but when package-lock.json hasn't changed it installs from the cache instead of re-downloading every package, which is substantially faster.
Docker Layer Caching
Use GitHub Actions cache for Docker builds:
```yaml
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: ${{ inputs.directory }}
    push: true
    tags: ${{ secrets.ECR_REPOSITORY }}:${{ steps.tags.outputs.sha_short }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

Impact: Can reduce Docker build times from minutes to seconds when only source code changes (base image layers are cached).
Parallelization
Maximize parallel execution within stages:
```yaml
# Good: Three jobs run simultaneously
jobs:
  api_fast_checks:
    # no needs
  api_integration_tests:
    # no needs
  ui_fast_checks:
    # no needs

# Bad: Jobs run sequentially
jobs:
  api_fast_checks:
    # no needs
  api_integration_tests:
    needs: [api_fast_checks] # unnecessary dependency
  ui_fast_checks:
    needs: [api_integration_tests] # unnecessary dependency
```

Only add `needs` when there's a real dependency (like needing an artifact from a previous job).
Common Pitfalls
1. Rebuilding in Deploy Stage
Wrong:
```yaml
deploy_production:
  steps:
    - run: docker build -t myapp:latest . # Building again!
    - run: docker push myapp:latest
    - run: kubectl set image deployment/app app=myapp:latest
```

Right:
```yaml
deploy_production:
  needs: [build_backend]
  with:
    image_tag: ${{ needs.build_backend.outputs.image_tag }} # Using existing artifact
  steps:
    - run: kubectl set image deployment/app app=myapp:${{ inputs.image_tag }}
```

2. Acceptance Tests Not Using Build Artifacts
Wrong:
```yaml
acceptance_tests:
  steps:
    - run: docker build -t myapp:test . # Building a different image!
    - run: docker compose up
    - run: npm run test:e2e
```

Right:
```yaml
acceptance_tests:
  needs: [build_backend]
  with:
    backend_image_tag: ${{ needs.build_backend.outputs.image_tag }}
  steps:
    - run: docker pull myapp:${{ inputs.backend_image_tag }} # Using build artifact
    - run: docker compose up
    - run: npm run test:e2e
```

3. Unnecessary Sequential Dependencies
Wrong:
```yaml
ui_fast_checks:
  needs: [api_fast_checks] # No real dependency!
```

This makes ui_fast_checks wait for api_fast_checks to complete, even though UI checks don't need anything from API checks. This slows down your commit stage unnecessarily.
Right:
```yaml
ui_fast_checks:
  # No needs - runs in parallel with api_fast_checks
```

4. Missing secrets: inherit
Wrong:
```yaml
build_backend:
  uses: ./.github/workflows/build_backend.yml
  # Missing secrets!
```

Reusable workflows don't automatically get secrets. This will fail when build_backend.yml tries to access AWS credentials.
Right:
```yaml
build_backend:
  uses: ./.github/workflows/build_backend.yml
  secrets: inherit
```

Summary
Implementing a multi-stage CI/CD pipeline with GitHub Actions requires:
Understand the stages from Continuous Delivery:
- Commit: Fast feedback (<10 mins)
- Build: Create immutable artifacts
- Acceptance: Validate artifacts end-to-end
- Deploy: Deploy validated artifacts
Use reusable workflows (`workflow_call`) for:
- Composability and maintainability
- Single responsibility per workflow
- Reuse across orchestrators (build.yml and pull_request.yml)
Orchestrate with `needs` to:
- Create stage structure
- Maximize parallelization within stages
- Gate progression between stages
Pass outputs between jobs to:
- Implement Build Once, Deploy Everywhere
- Ensure artifact traceability
- Deploy exactly what was tested
Optimize for speed:
- Cache dependencies (npm, Docker layers)
- Parallelize within stages
- Only add `needs` when truly required
This pattern scales from simple projects (API + frontend) to complex systems (multiple APIs, multiple UIs, multiple deployment targets) while maintaining the core CD principles of fast feedback, quality gates, and safe deployments.
Further Reading
- Continuous Integration - The foundation practice
- Continuous Delivery - The deployment pipeline pattern
- GitHub Actions: Reusing Workflows - Official documentation
- GitHub Actions: Using jobs - Job dependencies with `needs`