Modernizing Legacy Hapi Applications

Applying legacy modernization patterns specifically to hapi.js applications

Introduction

This article applies the concepts from Working with Legacy Code specifically to hapi.js applications. If you haven't read that article yet, start there for foundational principles.

Historically, Synapse Studios built several Node.js APIs using hapi (see Hapi (Legacy) for implementation details). While these systems served their purpose well, evolving standards around modularity, testing, and clean architecture mean these codebases now benefit from incremental modernization.

This article presents three proven approaches we've used across multiple projects to modernize hapi applications while continuing to deliver business value. Each approach addresses the same core problems—tight coupling, unclear boundaries, and difficulty testing—but with different trade-offs.

The Hapi Challenge

Legacy hapi applications typically share these characteristics:

What Worked Well

  • Entity-based organization - Code organized by domain concept (users, orders, products) rather than technical layer (controllers, models, services)
  • Established patterns - Joi for validation, Bookshelf for ORM, Electrolyte for dependency injection
  • Working systems - Production-proven code serving real users

What Needs Improvement

  • Tight coupling - Services directly instantiate dependencies, making testing difficult
  • No module boundaries - Any code can import anything else, creating hidden dependencies
  • Framework leakage - Business logic mixed with hapi route handlers and Bookshelf models
  • Database-centric - Bookshelf models treated as domain entities, coupling domain to persistence
  • Limited testability - Tests require full framework and database setup

The question isn't whether these systems have value (they do), but rather: How do we incrementally improve them while continuing to ship features?

Three Modernization Approaches

Based on our experience across multiple projects, we've identified three distinct approaches for modernizing hapi applications. Each has been proven in production and offers different trade-offs.

Approach 1: Strangler Fig to New Technology

Pattern: Migrate to a new framework alongside the existing hapi application.

How It Works

The new application runs as a separate service on a different port. A routing layer (typically nginx) directs traffic to either the legacy hapi app or the new service based on the route. Over time, more routes migrate to the new service until the hapi app can be retired.

Infrastructure pattern:

  • Legacy hapi app: api.example.com → port 9000
  • New service: backend.example.com → port 9002
  • Routing via nginx or API gateway
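For illustration, here is how that routing split might look in nginx. This is a minimal sketch assuming the ports above; the /v2 prefix and single-host layout are hypothetical, and your rules will depend on which endpoints have actually moved.

```nginx
# Sketch: path-based routing between the legacy hapi app and the new service
server {
    listen 80;
    server_name api.example.com;

    # Migrated endpoints (hypothetical /v2 prefix) go to the new service
    location /v2/ {
        proxy_pass http://127.0.0.1:9002;
    }

    # Everything else continues to hit the legacy hapi app
    location / {
        proxy_pass http://127.0.0.1:9000;
    }
}
```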

Migration sequence:

  1. New service starts small (health checks, one new feature)
  2. Deploy alongside legacy app
  3. Route new endpoints to new service
  4. Gradually migrate existing features
  5. Legacy routes decrease over time
  6. Eventually retire hapi app

When to Use This Approach

Choose this when:

  • Major architectural shift is needed
  • Team has expertise in target framework (e.g., NestJS)
  • Separate deployment/scaling is acceptable
  • Long-term investment in new technology stack

Characteristics in practice:

  • Clean architecture (domain, application, infrastructure)
  • Module-based organization with public interfaces
  • Adapter pattern for inter-module communication
  • Pure unit tests of business logic (no framework)
  • Integration tests with real database
  • Repository pattern abstracts persistence

Trade-offs:

  • ✅ Clean break from legacy patterns
  • ✅ Modern framework with active community
  • ✅ Clear separation during migration
  • ❌ Maintain two codebases temporarily
  • ❌ Deployment complexity increases
  • ❌ Team needs new framework expertise

Approach 2: Plugin-Based Modular Monolith with Domain Events

Pattern: Stay within hapi, reorganize into modules using hapi's plugin system with domain events for inter-module communication.

How It Works

Each bounded context becomes a hapi plugin with clear boundaries. Modules communicate through domain events rather than direct method calls, creating loose coupling and eventual consistency between modules.

Module structure: Each module is a hapi plugin with:

  • Plugin registration function
  • Clean architecture layers (domain, application, infrastructure)
  • Domain entities with event emission
  • Event handlers for other modules' events

Event-driven communication:

  • Aggregate roots emit domain events when state changes
  • Events represent past facts ("OrderPlaced", "InventoryReserved")
  • Other modules subscribe to relevant events
  • Cross-module operations are eventually consistent

Example module flow:

  1. HTTP request hits module's route handler
  2. Handler delegates to use case
  3. Use case operates on domain entities
  4. Entity emits domain event
  5. Repository saves entity and publishes events
  6. Other modules' event handlers react
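To make steps 3 through 5 concrete, here is a minimal TypeScript sketch. The AggregateRoot base class, the Order entity, and the EventBus are hypothetical names for illustration, not a prescribed implementation.

```typescript
// Sketch: aggregate collects events; repository saves, then publishes them
type DomainEvent = { name: string; payload: Record<string, unknown> };

abstract class AggregateRoot {
  private pending: DomainEvent[] = [];

  protected emit(event: DomainEvent): void {
    this.pending.push(event);
  }

  pullEvents(): DomainEvent[] {
    const events = this.pending;
    this.pending = [];
    return events;
  }
}

class Order extends AggregateRoot {
  constructor(public readonly id: string, private placed = false) {
    super();
  }

  place(): void {
    this.placed = true;
    // Events are past facts, named in past tense
    this.emit({ name: 'OrderPlaced', payload: { orderId: this.id } });
  }
}

interface EventBus {
  publish(event: DomainEvent): Promise<void>;
}

class OrderRepository {
  constructor(
    private store: { save(order: Order): Promise<void> },
    private bus: EventBus,
  ) {}

  async save(order: Order): Promise<void> {
    await this.store.save(order);
    // Publish after persisting, so handlers never react to unsaved state
    for (const event of order.pullEvents()) {
      await this.bus.publish(event);
    }
  }
}
```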

When to Use This Approach

Choose this when:

  • Team is committed to staying with hapi
  • Domain complexity benefits from event-driven architecture
  • Eventual consistency is acceptable for cross-module operations
  • Team wants strong Domain-Driven Design patterns

Characteristics in practice:

  • Hapi plugins with explicit registration
  • Aggregate root base class managing events
  • Domain events for inter-module communication
  • Event emitter injected at infrastructure level
  • Schmervice or similar for service registration
  • Clear separation of domain, application, infrastructure

Trade-offs:

  • ✅ Stay within familiar hapi ecosystem
  • ✅ Strong boundaries through events
  • ✅ Loose coupling between modules
  • ✅ Supports complex domains well
  • ❌ Eventual consistency adds complexity
  • ❌ Debugging across events can be harder
  • ❌ Requires understanding of DDD concepts

Approach 3: Plugin-Based with Explicit Public Interfaces

Pattern: Stay within hapi, reorganize into modules using hapi's plugin system with explicit public interfaces.

How It Works

Each module is a hapi plugin that exposes a clear public API via server.expose(). Other modules access these APIs through request.server.plugins.moduleName. This creates explicit contracts between modules while keeping implementation details private.

Module structure: Each module is a hapi plugin with:

  • Plugin registration function
  • Public interface definition (TypeScript interface)
  • Service factory implementing the interface
  • Private implementation details
  • server.expose() to publish public API

Communication pattern: Modules call each other directly through exposed interfaces:

```typescript
// In route handler
const { orderModule } = request.server.plugins;
await orderModule.completeOrder(orderId);
```
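On the providing side, the plugin publishes its interface with server.expose(). A sketch with hypothetical use-case wiring; server.expose() itself is standard hapi:

```typescript
import type { Plugin, Server } from '@hapi/hapi';

export interface OrderModuleApi {
  completeOrder(orderId: string): Promise<void>;
}

// Hypothetical order module; only what is exposed here is public
export const orderModulePlugin: Plugin<void> = {
  name: 'orderModule',
  register(server: Server) {
    const api: OrderModuleApi = {
      async completeOrder(orderId: string): Promise<void> {
        // Delegate to a private use case; implementation stays inside the plugin
      },
    };
    server.expose(api); // published as server.plugins.orderModule
  },
};
```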

When to Use This Approach

Choose this when:

  • Team is committed to staying with hapi
  • Prefer explicit contracts over event-driven complexity
  • Synchronous communication fits the domain
  • Want simpler mental model than events

Characteristics in practice:

  • Hapi plugins with public service interfaces
  • TypeScript interfaces define contracts
  • Use cases contain business logic
  • Repository pattern for data access
  • Clear public vs. private separation

Trade-offs:

  • ✅ Stay within familiar hapi ecosystem
  • ✅ Simpler than event-driven approach
  • ✅ Clear, explicit contracts
  • ✅ Easier debugging (direct calls)
  • ❌ Tighter coupling than events
  • ❌ Synchronous calls may limit scalability
  • ❌ Circular dependencies possible if not careful

Common Patterns Across All Approaches

Despite different strategies, all three approaches share core principles:

Modularization

Clear module boundaries: Every module defines what's public and what's private. The mechanism differs (exports, events, server.expose()), but the principle remains: hide implementation, expose minimal interface.
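With plain module exports, for example, the boundary can be a single barrel file that re-exports only the public surface. A sketch with illustrative file names:

```typescript
// modules/orders/index.ts: the module's only sanctioned entry point
export { completeOrder } from './application/complete-order';
export type { Order } from './domain/order';
// Anything not re-exported here (Bookshelf models, mappers, ...) stays private
```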

Benefits:

  • Reduces cognitive load (focus on one module at a time)
  • Enables parallel development
  • Makes dependencies explicit
  • Supports future extraction if needed

Key insight from Modular Monolith: You can achieve modularity within a single codebase. You don't need microservices to get clear boundaries.

Clean Architecture Layering

All three approaches separate concerns into layers:

Domain Layer:

  • Business entities and rules
  • Repository interfaces
  • No framework dependencies
  • No database knowledge

Application Layer:

  • Use cases / application services
  • Orchestrates domain objects
  • Depends on repository interfaces, not their implementations
  • No HTTP/framework knowledge

Infrastructure Layer:

  • Hapi route handlers
  • Bookshelf models
  • Concrete repository implementations
  • Framework configuration

Why this matters: Business logic becomes testable without framework or database. Use cases can be tested with simple mocks. Domain entities are just JavaScript objects.
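A compact sketch of the first two layers for a hypothetical order feature; all names are illustrative. Note that the use case sees only the repository interface, so any fake that satisfies it will do in a test.

```typescript
// Domain layer: entity and repository interface, no framework imports
export interface Order {
  id: string;
  status: 'pending' | 'completed';
}

export interface OrderRepository {
  findById(id: string): Promise<Order | null>;
  save(order: Order): Promise<void>;
}

// Application layer: the use case orchestrates the domain through the interface
export class CompleteOrder {
  constructor(private orders: OrderRepository) {}

  async execute(orderId: string): Promise<void> {
    const order = await this.orders.findById(orderId);
    if (!order) throw new Error(`Order ${orderId} not found`);
    await this.orders.save({ ...order, status: 'completed' });
  }
}

// Infrastructure layer (not shown): a hapi route handler that calls
// CompleteOrder, and a Bookshelf-backed class implementing OrderRepository.
```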

Testing & Test Design

Pure unit tests:

  • Test use cases with mocked repositories
  • Test domain entities with no dependencies
  • Fast, reliable, no database required
  • Tests guide design (if it's hard to test, design needs work)
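A sketch of such a test for the hypothetical CompleteOrder use case above, using Node's built-in test runner and an in-memory fake; swap in your own test framework as appropriate:

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { CompleteOrder, type Order } from './complete-order'; // hypothetical path

test('completes a pending order', async () => {
  // In-memory fake repository: no database, no hapi, runs in milliseconds
  const saved: Order[] = [];
  const fakeRepo = {
    findById: async (): Promise<Order | null> => ({ id: 'o1', status: 'pending' }),
    save: async (order: Order): Promise<void> => { saved.push(order); },
  };

  await new CompleteOrder(fakeRepo).execute('o1');

  assert.deepEqual(saved, [{ id: 'o1', status: 'completed' }]);
});
```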

Integration tests:

  • Test at module boundaries
  • Use real database
  • Test full request/response cycle
  • Slower, but verify system actually works
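hapi helps here: server.inject() drives the full request/response cycle without binding a port. A sketch, assuming a hypothetical buildServer() factory that registers the plugins against a test database:

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { buildServer } from './server'; // hypothetical factory

test('GET /orders/{id} returns the order', async () => {
  const server = await buildServer(); // real plugins, real test database

  const res = await server.inject({ method: 'GET', url: '/orders/o1' });

  assert.equal(res.statusCode, 200);
  await server.stop();
});
```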

Characterization tests: When modernizing existing code, write tests that capture current behavior before refactoring. These provide a safety net during migration.

Incremental Migration

The Boy Scout Rule applies: Leave the code better than you found it.

Practical migration pattern:

  1. Choose a seam - Find a small, isolated feature to migrate first
  2. Write characterization tests - Capture current behavior
  3. Create new module - Implement using chosen approach
  4. Route to new module - Feature flag or routing logic
  5. Validate in production - Run both implementations briefly
  6. Remove old implementation - Delete legacy code
  7. Repeat - Pick next feature

Start small:

  • First module: simple, low-risk feature
  • Build confidence in patterns
  • Refine approach based on learnings
  • Gradually tackle more complex areas

Key insight: Each increment should take weeks, not months. If a migration step takes longer, the scope is too large.

Choosing an Approach

How do you decide which approach fits your situation?

Decision Factors

Technical factors:

  • Current hapi app complexity
  • Team size and expertise
  • Domain complexity
  • Performance requirements
  • Deployment constraints

Organizational factors:

  • Timeline and urgency
  • Team's comfort with change
  • Learning appetite
  • Risk tolerance

Decision Matrix

Choose Strangler Fig to New Technology when:

  • Legacy patterns are deeply entrenched
  • Team wants to invest in modern framework
  • Can accept deployment complexity
  • Long-term architectural shift is goal
  • Separate scaling is beneficial

Choose Plugin-Based with Domain Events when:

  • Staying with hapi is preferred
  • Domain is complex with many interactions
  • Team understands DDD concepts
  • Eventual consistency is acceptable
  • Loose coupling is priority

Choose Plugin-Based with Public Interfaces when:

  • Staying with hapi is preferred
  • Domain is relatively straightforward
  • Team prefers simplicity over sophistication
  • Synchronous operations fit the domain
  • Direct contracts are clearer than events

No wrong choice: All three approaches work. Pick based on your context, not abstract "best practices."

Getting Started

Regardless of which approach you choose, follow these steps:

1. Understand Current State

Map the system:

  • What are the major features/domains?
  • Where are the natural boundaries?
  • What dependencies exist between areas?
  • Where is the pain felt most?

Techniques:

  • Event storming with the team
  • Dependency diagrams
  • Pain point discussions

2. Identify a Seam

Good first candidates:

  • New feature being added
  • Bug-prone area needing attention
  • Isolated functionality
  • Small, well-understood domain

Avoid starting with:

  • Core, complex domains
  • Areas with many dependencies
  • Unstable requirements
  • Critical, high-risk features

3. Write Characterization Tests

Before changing anything, write tests that capture current behavior:

  • What does this code actually do?
  • What are the edge cases?
  • What are the side effects?

These tests may pass on wrong behavior—that's fine. They document "what is" before you change to "what should be."
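In practice, a characterization test often just pins the status code and body the endpoint returns today. A sketch, with a hypothetical endpoint and the same buildServer() factory assumed above:

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { buildServer } from './server'; // hypothetical factory

test('characterize: POST /orders with an empty payload', async () => {
  const server = await buildServer();

  const res = await server.inject({ method: 'POST', url: '/orders', payload: {} });

  // Pin current behavior, right or wrong; refactoring must not change it
  assert.equal(res.statusCode, 400);
  await server.stop();
});
```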

4. Implement One Module

Pick your approach and implement:

  • Create plugin structure
  • Implement layers (domain, application, infrastructure)
  • Define public interface
  • Write unit tests for use cases
  • Write integration tests for boundaries

Keep it small: First module is a learning experience. Don't try to perfect everything.

5. Deploy and Validate

  • Deploy alongside existing code
  • Route traffic to new module
  • Monitor for issues
  • Gather feedback
  • Iterate on approach

6. Refine and Repeat

  • What worked well?
  • What was harder than expected?
  • What would you change?
  • Update patterns based on learnings
  • Pick next module and repeat

Handling Common Challenges

Shared Database

Challenge: Multiple modules need the same data.

Solutions:

  • Views: Create database views that hide implementation
  • Shared read models: Separate read paths from write paths
  • Event-driven sync: One module owns writes, publishes events
  • Temporary duplication: Accept data duplication during transition

Key insight: Don't let a shared database prevent modularity. Boundaries can exist at the code level even when the database is shared.

Circular Dependencies

Challenge: Module A needs Module B, Module B needs Module A.

Solutions:

  • Examine boundaries: Circular dependencies often indicate wrong boundaries
  • Domain events: Break synchronous cycle with events
  • Shared kernel: Extract truly shared concepts to separate module
  • Redesign: Consider if modules are properly separated

Prevention: Define module dependencies upfront and enforce them with architectural rules (e.g., dependency-cruiser).
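With dependency-cruiser, both rules fit in a small configuration. A sketch, assuming a hypothetical src/modules/* layout:

```javascript
// .dependency-cruiser.js (sketch)
module.exports = {
  forbidden: [
    {
      name: 'no-circular',
      severity: 'error',
      from: {},
      to: { circular: true }, // fail on any dependency cycle
    },
    {
      name: 'orders-only-via-billing-public-api',
      severity: 'error',
      from: { path: '^src/modules/orders' },
      // Allow only billing's index (its public interface), nothing deeper
      to: { path: '^src/modules/billing/(?!index)' },
    },
  ],
};
```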

Testing Legacy Code

Challenge: Can't test without database and full framework.

Solutions:

  • Extract method: Pull logic into pure function
  • Dependency injection: Accept dependencies as parameters
  • Introduce seam: Create abstraction point for testing
  • Characterization tests: Test through framework initially

Key insight from Feathers: Make the smallest change that enables testing; those tests then enable larger refactoring.
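A before/after sketch with hypothetical names: the pricing logic moves into a pure function, and the remaining side effect arrives as a parameter.

```typescript
// Before (sketch): everything inline in the hapi handler, so testing it
// meant standing up the server and the database.
// handler: async (request, h) => { /* fetch items, total them, save order */ }

// After: the computation is a pure function, trivially unit-testable
export function computeTotal(items: Array<{ price: number; qty: number }>): number {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// The side effect is injected, so a test can pass a fake saveOrder
export async function checkout(
  items: Array<{ price: number; qty: number }>,
  saveOrder: (total: number) => Promise<void>,
): Promise<number> {
  const total = computeTotal(items);
  await saveOrder(total);
  return total;
}
```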

Team Onboarding

Challenge: Team doesn't understand new patterns.

Solutions:

  • Pair on first module: Learn together
  • Document decisions: ADRs for key choices
  • Code reviews: Share knowledge
  • Internal talks: Team members teach patterns
  • Living documentation: Update docs as patterns evolve

Avoid: Don't mandate patterns without support. Build understanding through collaboration.

Measuring Progress

How do you know if modernization is working?

Technical Metrics

Testability:

  • Percentage of business logic with unit tests
  • Test execution time decreasing
  • Test failures are clear and actionable

Modularity:

  • Clear public interfaces defined
  • Inter-module dependencies are explicit
  • Can understand module without reading entire codebase

Maintainability:

  • Time to add new feature decreasing
  • Bug fix time decreasing
  • New team members productive faster

Team Metrics

Confidence:

  • Team feels safe making changes
  • Refactoring happens opportunistically
  • Fear of breaking things decreases

Velocity:

  • Feature delivery maintains or increases
  • Technical debt work fits within sprints
  • Modernization doesn't block features

When to Reassess

Red flags:

  • Modernization taking longer each iteration
  • Team confidence decreasing
  • Velocity slowing
  • Complexity increasing

If you see these signs:

  • Stop and reassess approach
  • Consider smaller scope
  • Get external perspective
  • Be willing to adjust strategy

Real-World Patterns

Based on our experience across multiple hapi modernization projects:

Pattern: Coexistence Strategy

Observation: All three approaches require legacy and new code to coexist temporarily.

Strategies that worked:

  • Feature flags control which implementation runs
  • Adapter pattern allows swapping implementations
  • Parallel run compares results temporarily
  • Progressive rollout migrates users gradually

Key lesson: Plan for gradual transition, not big bang cutover.
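At its simplest, the coexistence point is a flag check in one handler. A sketch; the flag client and both implementations are hypothetical:

```typescript
// Hypothetical collaborators
declare const flags: { isEnabled(name: string): Promise<boolean> };
declare const newOrderModule: { completeOrder(orderId: string): Promise<void> };
declare const legacyOrderService: { complete(orderId: string): Promise<void> };

// The flag decides which implementation serves the request; both stay deployed
export async function completeOrderHandler(orderId: string): Promise<void> {
  if (await flags.isEnabled('orders.new-module')) {
    return newOrderModule.completeOrder(orderId);
  }
  return legacyOrderService.complete(orderId);
}
```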

Pattern: Public Interface Evolution

Observation: Public interfaces need to evolve as understanding deepens.

Strategies that worked:

  • Version interfaces when breaking changes needed
  • Deprecation warnings before removing methods
  • Adapter layer maintains old interface temporarily
  • Documentation of interface contracts

Key lesson: Public interfaces are contracts—handle changes carefully.

Pattern: Testing Strategy Shift

Observation: Test strategy changes as architecture improves.

Evolution we saw:

  • Start: Integration tests through framework
  • Middle: Mix of integration and unit tests
  • End: Primarily unit tests, targeted integration tests

Key lesson: Testing strategy should match architectural maturity.

Pattern: Team Learning Curve

Observation: First module takes longest, subsequent modules accelerate.

Typical timeline:

  • First module: 2-3x expected time (learning, establishing patterns)
  • Second module: 1.5x expected time (refining patterns)
  • Third module: At expected time (patterns established)
  • Subsequent: Faster than expected (patterns are second nature)

Key lesson: Don't judge approach based on first module. Patterns improve with practice.

Key Takeaways

  1. Three proven approaches exist - Strangler Fig, Domain Events, Public Interfaces—all work in production
  2. Choose based on context - No universally "best" approach; pick what fits your situation
  3. Common principles apply - Modularization, clean architecture, incremental migration transcend specific approach
  4. Start small - First module is learning; don't try to perfect everything immediately
  5. Measure progress - Track testability, modularity, team confidence—adjust if metrics decline
  6. Plan for coexistence - Legacy and new code will coexist; design for gradual transition
  7. Team learning matters - First module is slowest; patterns accelerate with practice
  8. Stay pragmatic - Perfect architecture isn't the goal; working system with clear direction is

Further Reading

Books:

  • "Working Effectively with Legacy Code" by Michael Feathers - Essential refactoring techniques
  • "Monolith to Microservices" by Sam Newman - Decomposition patterns (applicable within monoliths too)
  • "Implementing Domain-Driven Design" by Vaughn Vernon - Aggregate roots, domain events, bounded contexts

Conclusion

Modernizing legacy hapi applications is not about rewriting from scratch. It's about incremental improvement using proven patterns while continuing to deliver value.

All three approaches work. The "right" choice depends on your team, domain, and constraints. Start small, learn from each module, and let patterns emerge from practice.

The goal isn't perfect architecture—it's a sustainable system where the team can confidently make changes, tests provide safety, and boundaries keep complexity manageable.

Most importantly: Keep shipping. Modernization should enable faster feature delivery, not replace it.