
Terraform Module Philosophy

When to write a module, what makes most public modules bad, and how we evolve shared abstractions

Related Concepts: Coupling and Cohesion | Module Decomposition | Implements: Terraform Module Design

The Problem with Public Modules

Spend any time browsing the Terraform Registry and a pattern emerges quickly: most public modules are not good abstractions. They tend to fall into two failure modes, and understanding why they fail is the clearest way to understand what we're aiming for instead.

The first is the Swiss Army knife module — a single module that tries to solve every possible use case through a sprawling interface of feature flags, conditional resources, and deeply nested variable objects. These modules look impressive at first glance, but the moment you need to debug a plan diff or understand why a resource was created with a particular configuration, you're spelunking through hundreds of lines of dynamic blocks and ternary expressions that exist to serve use cases you'll never encounter. This is the infrastructure equivalent of a god module: it does everything, encodes no opinions, and forces callers to understand its full internal complexity just to use it safely.
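A condensed sketch of what this failure mode looks like in practice. All of the names, flags, and resources here are invented for illustration, not taken from any real module:

```hcl
# Hypothetical Swiss Army knife interface: a flag or nested object for
# every conceivable caller, and conditional resources behind each one.
variable "create_service" {
  type    = bool
  default = true
}

variable "enable_load_balancer" {
  type    = bool
  default = false
}

variable "load_balancer_config" {
  type = object({
    container_name = optional(string)
    container_port = optional(number)
  })
  default = {}
}

variable "cluster_arn" {
  type = string
}

variable "task_definition_arn" {
  type = string
}

# ...dozens more flags and nested objects in the same vein...

resource "aws_ecs_service" "this" {
  # Conditional existence: callers must trace every flag and ternary
  # just to predict what a plan will do.
  count = var.create_service ? 1 : 0

  name            = "example"
  cluster         = var.cluster_arn
  task_definition = var.task_definition_arn

  # Dynamic blocks that exist only to serve callers you may never have.
  dynamic "load_balancer" {
    for_each = var.enable_load_balancer ? [var.load_balancer_config] : []
    content {
      container_name = load_balancer.value.container_name
      container_port = load_balancer.value.container_port
    }
  }
}
```

Note that even in this trimmed form, reading a plan diff requires evaluating `create_service`, `enable_load_balancer`, and the shape of `load_balancer_config` before you know which resources exist at all.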

The second failure mode is the thin wrapper — a module that accepts every possible input for every resource it contains and passes them straight through, adding a layer of indirection without adding any value. You end up with a variable list that mirrors the resource's argument reference, which means the caller is doing all the same work they'd do without the module, just with an extra layer of abstraction to trace through when something goes wrong.
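The pattern is easiest to see in a small example. This is a hypothetical wrapper around `aws_s3_bucket` (the argument names are real provider arguments; the module itself is invented):

```hcl
# Hypothetical thin wrapper: the variable list mirrors the resource's
# argument reference one-to-one.
variable "bucket" {
  type = string
}

variable "force_destroy" {
  type    = bool
  default = false
}

variable "object_lock_enabled" {
  type    = bool
  default = false
}

variable "tags" {
  type    = map(string)
  default = {}
}

# Every input passes straight through. The module makes no decisions,
# so the caller still has to understand the resource fully; the wrapper
# only adds a layer to trace through when something goes wrong.
resource "aws_s3_bucket" "this" {
  bucket              = var.bucket
  force_destroy       = var.force_destroy
  object_lock_enabled = var.object_lock_enabled
  tags                = var.tags
}
```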

Neither of these is a real abstraction. A good module makes decisions so its callers don't have to. It codifies an opinion about how a piece of infrastructure should be configured — security posture, naming conventions, observability defaults, tagging strategy — and exposes only the inputs that genuinely vary between callers. If a module's variable list looks like the resource's argument reference, it isn't earning its keep.
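By contrast, an opinionated interface for the same bucket might look like this. This is a sketch, not our actual module: the naming convention, tag keys, and chosen resources are illustrative assumptions.

```hcl
# Only genuine variation is exposed; everything else is a decision
# the module makes on the caller's behalf.
variable "name" {
  type        = string
  description = "Logical name; the full bucket name is derived by convention."
}

variable "environment" {
  type        = string
  description = "Deployment environment, e.g. staging or production."
}

resource "aws_s3_bucket" "this" {
  # Naming convention: decided here, not by each caller.
  # (The "acme" prefix is a placeholder for an org-wide convention.)
  bucket = "acme-${var.environment}-${var.name}"

  # Tagging strategy: an opinion the module encodes.
  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

# Security posture is also an opinion, not an input. Callers cannot
# accidentally create a public bucket through this module.
resource "aws_s3_bucket_public_access_block" "this" {
  bucket                  = aws_s3_bucket.this.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

Two inputs instead of a mirror of the argument reference: the interface is small because the module has already made the decisions that don't vary between callers.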

When to Write a Module

The instinct to extract a module early is understandable — nobody wants to copy-paste infrastructure code — but premature abstraction in Terraform is more costly than it first appears. A module written too early is a module written before you understand what actually varies between callers and what should be hardcoded. That incomplete understanding is exactly how thin wrappers and Swiss Army knives get created: the author doesn't yet know which inputs are real requirements and which are speculative, so they expose everything just in case.

The widely held rule of thumb — articulated by Martin Fowler, Kent Beck, and others — is that after the second time you repeat something for the same reason, it's time to write an abstraction. We follow this principle closely: a module gets written when we know it's highly likely something will be reused more than twice, not before. The first time you provision an ECS service, you write it inline in the root module. The second time, you notice the duplication but resist the urge — you don't yet have enough information to know what the right abstraction looks like. The third time, the pattern is clear. You can see what genuinely varies between callers and what should be baked in as a default, and you extract a module with confidence that it represents real, observed needs rather than imagined ones.
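At that third occurrence, the extraction can be small and confident. A sketch of what the resulting call site might look like (the module path and input names are hypothetical):

```hcl
# Inputs are only what was observed to vary across the three inline
# copies. Everything that never varied (deployment circuit breaker,
# log configuration, tagging) stays a hardcoded default inside the
# module rather than a speculative input.
module "api_service" {
  source        = "./modules/ecs-service"
  name          = "api"
  cpu           = 512
  memory        = 1024
  desired_count = 3
}
```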

Forking, Evolving, and Upstreaming

Our shared Terraform modules are designed to work out of the box on most projects, encoding our best current thinking about how a given piece of infrastructure should be configured. But infrastructure lives in the real world, and real projects sometimes need environment-specific configuration that the shared module doesn't account for. When that happens, forking the module into the project's modules/ directory is the pragmatic choice — and it's a perfectly acceptable one. The fork should be minimal and deliberate: change only what the project genuinely requires, not what might hypothetically be useful someday.
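Mechanically, a fork is usually just a source change at the call site plus a copied module directory. A sketch, with a hypothetical repository URL and version tag:

```hcl
# Before: consuming the shared module directly.
module "queue" {
  source = "git::https://github.com/example-org/terraform-modules.git//sqs-queue?ref=v2.1.0"
  name   = "ingest"
}
```

```hcl
# After a deliberate fork: a local copy under modules/, changed only
# where this project genuinely requires it.
module "queue" {
  source = "./modules/sqs-queue"
  name   = "ingest"
}
```

Keeping the fork under `modules/` in the project keeps the divergence visible and reviewable, rather than buried in a private registry.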

The important discipline is what happens next. If we find the same fork appearing across multiple projects — the same additional input, the same adjusted default, the same structural change — that's a signal that our collective opinion about how that unit of infrastructure should be configured has evolved. At that point, the right response isn't to let each project continue diverging quietly. It's to open a pull request against the upstream shared module, incorporate the pattern that practice has validated, and release a new major version so every project benefits from the evolved thinking. This feedback loop — use as-is, fork when genuinely necessary, upstream when patterns emerge across projects — is how our modules stay opinionated and useful without becoming rigid or stale. The modules get better because real projects pressure-test them, and the projects get better because the modules absorb lessons learned across the organization.
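Once the pattern is upstreamed and released, each project deletes its fork and pins the new major version. Continuing the hypothetical example from above (the URL and `ref` are placeholders):

```hcl
# Fork deleted; back on the shared module at the new major version
# that absorbed the pattern practice validated.
module "queue" {
  source = "git::https://github.com/example-org/terraform-modules.git//sqs-queue?ref=v3.0.0"
  name   = "ingest"
}
```

Pinning an explicit `ref` is what makes the major-version release safe: no project picks up the evolved module until it deliberately moves its pin.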