Terraform Project Structure
How we organize Terraform repositories around account boundaries, root modules, and deployment lifecycles
Related Concepts: Coupling and Cohesion | Continuous Delivery
Folder Structure
Our layout is loosely based on the Trussworks Terraform layout and codified in our platform infrastructure template. The top-level structure mirrors AWS account boundaries — each directory represents one AWS account in an AWS Organization.
While our examples use AWS accounts, this pattern is universally applicable. The top-level directories represent whatever high-level organizational separation your cloud provider offers: resource groups in Azure, projects in GCP, or organizations/spaces in Cloud Foundry. The principle is the same — environment isolation at the provider's strongest boundary, with root modules nested inside for finer-grained separation of concerns.
.
├── .github/workflows/          # CI pipelines
├── modules/                    # Project-specific reusable modules
│   ├── waf/
│   ├── bastion/
│   └── ...
├── org-infra/                  # Organization-wide shared infra (DNS, ECR, pipelines)
│   ├── admin-global/
│   ├── build-pipeline/
│   └── bootstrap/
├── dev/                        # Dev account
│   ├── admin-global/
│   ├── bootstrap/
│   └── ...
├── prod/                       # Prod account
│   ├── admin-global/           # Shared account resources (VPC, CloudTrail, ACM)
│   ├── cluster-infra/          # Shared application infra (ALB, ECS cluster, WAF)
│   │   └── tests/unit/         # Terraform native tests
│   ├── service-backend-prod/   # Per-service root modules
│   ├── email/
│   ├── file-storage/
│   └── bootstrap/
├── test/                       # Test helpers (backend overrides)
├── init.sh                     # Project initialization script
└── README.md

Accounts as Top-Level Directories
Each account directory (dev/, prod/, org-infra/) is fully self-contained. Everything needed to manage that account's infrastructure lives inside it. The init.sh script in the template renames orgname-* directories to match the project.
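The rename step can be sketched as a small shell loop. This is a hedged illustration, not the template's actual script; the "myapp" project name and the temporary directory layout are assumptions.

```shell
# Illustrative sketch of init.sh's rename step: orgname-* directories
# are renamed to <project>-*. Paths and names here are made up.
set -eu
project="myapp"
tmp=$(mktemp -d)
mkdir -p "$tmp/orgname-dev" "$tmp/orgname-prod"

# Rename every orgname-* directory to use the project name instead
for dir in "$tmp"/orgname-*; do
  base=$(basename "$dir")
  mv "$dir" "$tmp/$project-${base#orgname-}"
done

ls "$tmp"
```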
Root Modules Within Accounts
Each subdirectory within an account is a root module with its own state file:
- bootstrap/ — creates the S3 bucket and DynamoDB table for remote state. Applied once during account setup, then never touched again. State is committed to the repo.
- admin-global/ — account-wide shared resources: VPC, CloudTrail, ACM certificates, shared IAM roles. This is the stable foundation that other root modules depend on.
- cluster-infra/ — shared application infrastructure: ALB, ECS cluster, WAF, database. Resources that multiple services share but that change more frequently than admin-global.
- service-<name>/ — per-service root modules for application-specific resources (ECS service, CodeDeploy, task definitions).
- Domain-specific root modules (email/, file-storage/) — infrastructure for specific concerns that have their own lifecycle.
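The bootstrap root module described above is small. A minimal sketch of what it might contain, assuming bucket and table names that follow the project's conventions (the template's actual resources may differ):

```hcl
# Hypothetical bootstrap/main.tf: remote state storage and locking.
# Bucket and table names are illustrative assumptions.
resource "aws_s3_bucket" "tf_state" {
  bucket = "myapp-prod-tf-state"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "myapp-prod-tf-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # required key name for Terraform S3 backend locking

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

Because this module creates the state backend itself, it is applied with local state first, which is why its state file is committed to the repo.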
The modules/ Directory
Project-specific modules that are reused across accounts or root modules live in modules/. If a module is generic enough to be used outside this project, it should be extracted into its own versioned repository (e.g. terraform-aws-alb, terraform-aws-ecs-service).
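Extraction changes only the module source. A hypothetical consumer of an extracted, versioned module (the repo URL, tag, and inputs are made up for illustration):

```hcl
# Hypothetical: consuming terraform-aws-alb from its own versioned repo
# instead of the local modules/ directory.
module "alb" {
  source = "git::https://github.com/orgname/terraform-aws-alb.git?ref=v1.2.0"

  # Inputs are illustrative
  vpc_id          = local.vpc_id
  private_subnets = local.private_subnets
}
```

Pinning to a tag (`?ref=v1.2.0`) keeps consumers stable while the extracted module evolves independently.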
Root Module Dependency Order
Root modules within an account form a dependency tree connected by terraform_remote_state data sources. The typical deployment order is:
bootstrap → admin-global → cluster-infra → service-backend, service-worker, ... (parallel)

bootstrap is a one-time setup. admin-global is the stable foundation. cluster-infra is shared application infra. Services depend on both admin-global and cluster-infra but are independent of each other and can be deployed in parallel.
File Conventions Within a Root Module
prod/cluster-infra/
├── main.tf             # Provider config, remote state data sources, locals
├── load_balancer.tf    # One file per logical concern
├── waf.tf
├── ecs_cluster.tf
├── database.tf
├── outputs.tf
└── tests/
    └── unit/           # Terraform native test files

Split by logical concern, not by resource type. waf.tf contains the WAF module call, not modules.tf. load_balancer.tf contains the ALB, its security groups, and access logs — not security_groups.tf and s3.tf.
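The tests/unit/ directory holds Terraform native tests (.tftest.hcl files, available since Terraform 1.6). A minimal sketch; the file name, run label, resource address, and assertion are assumptions, not this project's actual tests:

```hcl
# tests/unit/load_balancer.tftest.hcl (hypothetical)
run "alb_is_internet_facing" {
  command = plan # assert against the plan without applying

  assert {
    condition     = aws_lb.main.internal == false
    error_message = "The cluster ALB should be internet-facing"
  }
}
```

Running `terraform test` from the root module picks up every file under tests/.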
Coupling and Cohesion in Root Module Design
See Coupling and Cohesion for the foundational concepts.
The same forces that shape application architecture — coupling and cohesion — shape how we split and connect root modules. But in Terraform, the cost of coupling between root modules is amplified by the friction of terraform_remote_state data sources. Every cross-root-module dependency requires explicit state backend configuration, creates a temporal coupling (the producer must be applied before the consumer), and makes the dependency invisible at the module level until you read the data blocks.
Be intentional about when to split root modules. Split when there are genuinely different rates of change, different deployment schedules, or different blast radii. Don't split just because two things feel conceptually different — if they always change together and deploy together, they belong together.
Cohesion: Keep Related Infrastructure Together
Things that change together should live together. A root module should represent a cohesive unit of infrastructure that is deployed as one.
```hcl
# Good: ALB, WAF, and security groups in one root module.
# They share a lifecycle — you don't deploy a WAF without an ALB.
# prod/cluster-infra/
#   load_balancer.tf
#   waf.tf
#   ecs_cluster.tf
```

```hcl
# Bad: splitting the ALB and WAF into separate root modules.
# They always change and deploy together, but now they're coupled
# through remote state for no benefit.
# prod/load-balancer/   ← creates the ALB
# prod/waf/             ← reads ALB ARN via remote state just to attach
```

Signs of low cohesion in a root module: resources that never change when the others do, outputs that exist solely to feed another root module, and terraform apply runs that produce no changes 90% of the time.
Coupling: Minimize Cross-Root-Module Dependencies
Every terraform_remote_state data source is a coupling point. Each one means:
- Change amplification — renaming an output in the producer requires updating every consumer.
- Temporal coupling — the producer must be applied first, and apply order is manually defined.
- Cognitive load — understanding a root module requires tracing dependencies into other state files.
```hcl
# This is coupling. Be intentional about whether it's worth it.
data "terraform_remote_state" "admin_global" {
  backend = "s3"
  config = {
    bucket = "myapp-prod-tf-state"
    key    = "admin-global.tfstate"
    region = "us-west-2"
  }
}

locals {
  vpc_id          = data.terraform_remote_state.admin_global.outputs.vpc_id
  private_subnets = data.terraform_remote_state.admin_global.outputs.private_subnets
}
```

This coupling is acceptable when admin-global is a stable, rarely-changing foundation (VPC, DNS zones, shared IAM). It becomes problematic when both root modules change frequently and the dependency creates deployment coordination overhead.
When to Split Root Modules
Split when you have genuinely different:
- Rates of change — shared networking changes quarterly; application services change daily.
- Blast radii — a bad apply to a service shouldn't risk the database or VPC.
- Ownership — different teams manage different infrastructure.
- Deployment schedules — some infrastructure deploys on merge, some requires change windows.
When to Keep Things Together
Accept coupling within a cohesive boundary. If two pieces of infrastructure:
- Always deploy together
- Share the same blast radius
- Are owned by the same team
- Reference each other's resources directly
Then they belong in the same root module. The indirection of remote state adds cost without benefit.
Reducing Coupling When You Must Split
When a split is justified, minimize the coupling surface:
- Narrow the outputs — only expose what consumers actually need. Every output is a public contract.
- Use data-only modules — instead of terraform_remote_state, a data-only module can look up shared infrastructure by tags or naming conventions, decoupling consumers from the producer's state backend. See Data-Only Modules for details.
- Stabilize the interface — treat outputs of foundational root modules like a public API. Avoid renaming or removing them without checking consumers.
```hcl
# Tightly coupled: consumer knows the producer's state backend config
data "terraform_remote_state" "network" {
  backend = "s3"
  config  = { bucket = "...", key = "admin-global.tfstate", region = "..." }
}

# Loosely coupled: consumer discovers infrastructure by convention
module "network" {
  source      = "../../modules/network_lookup"
  environment = "prod"
}
```

The data-only module approach trades state backend coupling for a tagging/naming convention — which is usually more stable and doesn't require knowing another root module's internal state structure.
Verify Assumptions Against the Application
Before writing infrastructure that references application routes, endpoints, or behavior, verify those assumptions against the actual application code. Check:
- Backend route prefixes and controller paths
- Whether a global prefix (e.g. setGlobalPrefix) is set
- How the frontend constructs API URLs
- What the ALB actually serves (API-only vs mixed traffic)
Do not assume route patterns from variable names or comments — read the code. Infrastructure that doesn't match the application it supports is worse than no infrastructure at all.
Pull Requests
Structure PRs around the domain change, not the technical layers touched. A PR that adds rate limiting should read as "add rate limiting" — not "modify three files in two directories."
- Branch from the commit that introduces the module if iterating on it
- Reference related issues with "Related to" (not "Closes") when the PR partially satisfies a requirement
- Rebase onto latest main before opening the PR