Terraform project structure with reusable modules





Howdy! It’s been a while since I wrote here, time to shake the dust off the pen and put down something useful! 😉

In this article I’d like to share my thoughts on building a Terraform project in a way that fits the following:

- infrastructure is described as reusable modules,
- the same modules serve multiple deployments (environments) such as dev and prod,
- everything can be run both locally from the CLI and automatically from GitLab CI.

⚠️ This article is very much abstracted from a particular use case (i.e. it does not necessarily target a specific cloud provider or a specific Terraform provider); the idea can be applied to any use case. We are looking at the following structure:

.
├── README.md
├── deployments
│   ├── dev
│   │   ├── README.md
│   │   ├── backend.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── vars
│   │       └── dev.tfvars
│   └── prod
└── modules
    └── keypair
        ├── README.md
        ├── main.tf
        ├── outputs.tf
        ├── provider.tf
        └── variables.tf

Modules #

To begin, let’s look at the concept of modules in Terraform. Basically, a module is a directory that contains Terraform files which provision something meaningful (a complete set of resources). It’s up to me to define what a module should include: it can be a single resource or multiple resources grouped together. The best way is to think about all possible use cases where a single resource can be reused (if at all). Typically a standard module structure would look something like this:

modules
└── keypair
    ├── README.md
    ├── main.tf
    ├── outputs.tf
    ├── provider.tf
    └── variables.tf

As Terraform best practices suggest, I keep the following bare minimum in a single module: main.tf with the resources, variables.tf with the input variables, outputs.tf with the module outputs, provider.tf with the provider requirements, and a README.md describing the module.

The content of the above files can be found here.

In this case I use keypair as a complete module, which expects two input variables: keypair_name and ssh_key_file.

resource "openstack_compute_keypair_v2" "keypair" {
  name       = var.keypair_name
  public_key = file(var.ssh_key_file)
}
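
For completeness, here is a minimal sketch of what the module’s variables.tf and outputs.tf could look like (the descriptions and the keypair_name output are my assumptions for illustration, not the exact content of the repository):

# modules/keypair/variables.tf (sketch)
variable "keypair_name" {
  description = "Name of the keypair to create"
  type        = string
}

variable "ssh_key_file" {
  description = "Path to the public SSH key file to upload"
  type        = string
}

# modules/keypair/outputs.tf (sketch)
output "keypair_name" {
  description = "Name of the created keypair, handy as input for other modules"
  value       = openstack_compute_keypair_v2.keypair.name
}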

Nothing stops me from deploying this resource directly as shown below (for simplicity, I am passing values for the variables via the CLI; the alternatives are to use .tfvars files or to set environment variables with the TF_VAR_ prefix). The last option would require me to set TF_VAR_keypair_name and TF_VAR_ssh_key_file, but hey, hold your horses 🐎, I am going to use this approach a bit later while setting up GitLab CI.

cd modules/keypair

terraform init
terraform plan -var keypair_name=mykeypair -var ssh_key_file=~/.ssh/id_rsa.pub
terraform apply -auto-approve -var keypair_name=mykeypair -var ssh_key_file=~/.ssh/id_rsa.pub
terraform destroy -auto-approve -var keypair_name=mykeypair -var ssh_key_file=~/.ssh/id_rsa.pub

Executing modules #

Now, what if we have another module that needs to be deployed? Or what if we need to make references between modules (i.e. the output of the first module is required as input for the second one)? We can combine multiple modules and reference them like this:

module "dev_keypair" {
  source       = "../../modules/keypair"
  ssh_key_file = var.ssh_key_file
  keypair_name = var.keypair_name
}
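
To illustrate the cross-module reference mentioned above, an output of one module can be fed straight into another. The compute module below is purely hypothetical and only shows the wiring; it is not part of this setup:

# Hypothetical second module consuming the keypair module's output
module "dev_instance" {
  source       = "../../modules/compute"            # assumed module, for illustration only
  keypair_name = module.dev_keypair.keypair_name    # references an output of the keypair module
}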

So basically the idea is to provision instances from the deployments directory (by instance I mean a specific realisation of an infrastructure that reuses the modules):

deployments
└── dev
    ├── README.md
    ├── backend.tf
    ├── main.tf
    ├── outputs.tf
    ├── variables.tf
    └── vars
        └── dev.tfvars

The content of the above files can be found here. And to execute this we would just use the same approach as earlier:

cd deployments/dev

terraform init
terraform plan -var-file="vars/dev.tfvars" -var ssh_key_file=~/.ssh/id_rsa.pub
terraform apply -auto-approve -var-file="vars/dev.tfvars" -var ssh_key_file=~/.ssh/id_rsa.pub
terraform destroy -auto-approve -var-file="vars/dev.tfvars" -var ssh_key_file=~/.ssh/id_rsa.pub

Since there is a dev.tfvars file, I am passing most variables with -var-file; ssh_key_file is still passed as a variable via the CLI.
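
As a rough idea of what goes into those files, the deployment’s variables.tf just declares the variables that are forwarded to the module, and vars/dev.tfvars pins the environment-specific values (the concrete values below are made up for illustration):

# deployments/dev/variables.tf (sketch)
variable "keypair_name" {
  type = string
}

variable "ssh_key_file" {
  type = string
}

# deployments/dev/vars/dev.tfvars (sketch)
keypair_name = "dev-keypair"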

GitLab CI #

Now let’s reuse the GitLab Terraform CI templates to run this in GitLab CI. The following pipeline lives here.

stages:
  - prepare
  - validate
  - test
  - build
  - deploy
  - cleanup

include:
  - template: Terraform/Base.latest.gitlab-ci.yml
  - template: Jobs/SAST-IaC.latest.gitlab-ci.yml

variables:
  # a default value prevents TF_STATE_NAME from being empty for non-environment jobs like validate
  # wait for https://gitlab.com/groups/gitlab-org/-/epics/7437 to use variable defaults
  TF_STATE_NAME: dev
  TF_STATE: ${TF_STATE_NAME}
  TF_CLI_ARGS_plan: "-var-file=vars/${TF_STATE_NAME}.tfvars"
  TF_ROOT: ${CI_PROJECT_DIR}/deployments/dev

fmt:
  extends: .terraform:fmt
validate:
  extends: .terraform:validate

plan dev:
  extends: .terraform:build
  environment:
    name: $TF_STATE_NAME

apply dev:
  extends: .terraform:deploy
  environment:
    name: $TF_STATE_NAME

destroy:
  extends: .terraform:destroy
  environment:
    name: $TF_STATE_NAME
  variables:
    TF_CLI_ARGS_destroy: $TF_CLI_ARGS_plan
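
The deployment’s backend.tf works together with this pipeline: the GitLab Terraform template stores state in GitLab-managed Terraform state via the http backend, so a minimal stub like the one below is typically enough (this is a sketch of a common setup, not the exact file from the repository; the backend address and credentials are injected by the template’s gitlab-terraform wrapper during terraform init):

# deployments/dev/backend.tf (sketch, assuming GitLab-managed Terraform state)
terraform {
  backend "http" {
    # intentionally empty: address, lock/unlock URLs and credentials
    # are supplied via -backend-config by the CI template
  }
}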

There are a few things to keep in mind:

- ssh_key_file is no longer passed on the CLI; as hinted earlier, it is expected to come in as a TF_VAR_ssh_key_file CI/CD variable.
- TF_STATE_NAME drives both the GitLab-managed state/environment name and which vars/${TF_STATE_NAME}.tfvars file is picked up via TF_CLI_ARGS_plan.
- TF_ROOT points the template jobs at deployments/dev, so the whole pipeline runs against that deployment.
- TF_CLI_ARGS_destroy reuses the plan arguments, so destroy works with the same .tfvars file.

To execute another instance, I just need to add a directory under deployments in the same way as for dev, adjust the variables via a .tfvars file, and introduce additional stages/jobs pointing to the new deployment.
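
On the Terraform side, the new deployment would simply mirror the dev one; a rough sketch of what deployments/prod could contain (names and values are illustrative):

# deployments/prod/main.tf (sketch, mirrors the dev deployment)
module "prod_keypair" {
  source       = "../../modules/keypair"
  ssh_key_file = var.ssh_key_file
  keypair_name = var.keypair_name
}

# deployments/prod/vars/prod.tfvars (sketch)
keypair_name = "prod-keypair"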

