I have recently started working with Terraform, and I want to settle on a directory structure for a design with common and environment-specific components, integrated with Azure DevOps (ADO). I want to define a single environment parameter in the ADO pipeline so that the respective environment gets provisioned. With the directory structure below, I currently have to select both nonprod and a specific environment such as dev or uat.
We have five environments overall (dev, sit, uat, nft, prod), organized into three groups: nonprod, nft, and prod. Within a group the underlying infrastructure is common, but each environment is differentiated by its RDS.
Nonprod shares a VPC, subnets (private, public, database), an EKS cluster, and S3, but needs separate databases provisioned for dev, sit, and uat.
NFT has its own VPC, subnets (private, public, database), EKS cluster, S3, and RDS.
Prod has its own VPC, subnets (private, public, database), EKS cluster, S3, and RDS.
Overall, I want to choose a parameter in ADO that provisions the corresponding environment. I have tried the options below:
a common state file for the overall infra provisioning of the nonprod environments, and a separate one for the RDS components.
I have structured it as the two options below, but want to optimize it down to a single environment parameter, if that is possible (a pipeline sketch follows the first tree below).
The first option chooses the environment variables as env: nonprod and application: dev:
├── infrastructure
│   ├── environments
│   │   ├── prod
│   │   ├── nft
│   │   └── nonprod
│   │       ├── infrastructure
│   │       │   ├── main.tf
│   │       │   ├── variables.tf
│   │       │   └── backend.tf   (common for nonprod)
│   │       └── db
│   │           ├── uat
│   │           ├── sit
│   │           │   ├── main.tf
│   │           │   ├── backend.tf
│   │           │   └── variables.tf
│   │           └── dev
│   │               ├── main.tf
│   │               ├── backend.tf
│   │               └── variables.tf
│   └── modules
│       ├── vpc
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   └── variables.tf
│       ├── db
│       └── queue
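To make the goal concrete, a single-parameter pipeline over this first layout might look like the sketch below. The directory paths follow the tree above, while the parameter name, state handling, and steps are my assumptions about what could work:

```yaml
# azure-pipelines.yml — illustrative sketch only; directory names follow the
# tree above, the steps and state layout are assumptions
parameters:
  - name: environment
    displayName: Environment
    type: string
    default: dev
    values: [dev, sit, uat, nft, prod]

variables:
  # dev/sit/uat share the nonprod base layer; nft and prod stand alone
  ${{ if in(parameters.environment, 'dev', 'sit', 'uat') }}:
    baseDir: infrastructure/environments/nonprod/infrastructure
  ${{ else }}:
    baseDir: infrastructure/environments/${{ parameters.environment }}

steps:
  - script: |
      terraform init
      terraform apply -auto-approve
    displayName: Provision shared infrastructure
    workingDirectory: $(baseDir)

  # Only the grouped nonprod environments have a per-application db folder
  - ${{ if in(parameters.environment, 'dev', 'sit', 'uat') }}:
      - script: |
          terraform init
          terraform apply -auto-approve
        displayName: Provision ${{ parameters.environment }} database
        workingDirectory: infrastructure/environments/nonprod/db/${{ parameters.environment }}
```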
The second option is to have a separate folder for each environment (dev, uat, sit), each with all the required Terraform files:
├── infrastructure
│   ├── environments
│   │   ├── prod
│   │   ├── nft
│   │   ├── uat
│   │   ├── sit
│   │   │   ├── infrastructure
│   │   │   │   ├── main.tf
│   │   │   │   ├── variables.tf
│   │   │   │   └── backend.tf   (common for nonprod)
│   │   │   └── db
│   │   │       ├── main.tf
│   │   │       ├── backend.tf
│   │   │       └── variables.tf
│   │   └── dev
│   │       ├── infrastructure
│   │       │   ├── main.tf
│   │       │   ├── variables.tf
│   │       │   └── backend.tf   (common for nonprod)
│   │       └── db
│   │           ├── main.tf
│   │           ├── backend.tf
│   │           └── variables.tf
│   └── modules
│       ├── vpc
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   └── variables.tf
│       ├── db
│       └── queue
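One way to keep the backend.tf marked "(common for nonprod)" truly common is Terraform's partial backend configuration: backend.tf declares only an empty backend "s3" {} block, and the pipeline injects the state key at init time, so each environment writes to its own state file from the same code. A sketch of such a pipeline step, following the second layout (the bucket name, key layout, and region are assumptions):

```yaml
# Sketch of one pipeline step using Terraform partial backend configuration;
# bucket name, key layout, and region are assumptions
- script: |
    terraform init \
      -backend-config="bucket=my-tfstate-bucket" \
      -backend-config="key=${{ parameters.environment }}/db/terraform.tfstate" \
      -backend-config="region=eu-west-1"
    terraform plan -out=tfplan
    terraform apply tfplan
  displayName: Provision ${{ parameters.environment }} database layer
  workingDirectory: infrastructure/environments/${{ parameters.environment }}/db
```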
Can anyone please suggest better options, or would it be better to have independent pipelines?
Thanks in advance
2 Answers
I extracted common modules for nonprod (VPC, EKS cluster, subnets) and environment-specific modules (database, EKS node groups), maintained them in two separate folders, and passed the choice as parameters in the Azure DevOps pipeline.
When we choose infrastructure provisioning for dev, uat, or sit, it executes the common .tf files in the nonprod folder; when we choose environment provisioning, it executes the .tf files in that environment's folder.
In my main azure-pipelines.yml file, once the environment and action are chosen, the variables are passed to the respective templates.
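A minimal sketch of that wiring (the template file names and action values here are illustrative assumptions, not my exact files):

```yaml
# azure-pipelines.yml — sketch only; template paths and action names are assumptions
parameters:
  - name: environment
    displayName: Environment
    type: string
    values: [dev, sit, uat, nft, prod]
  - name: action
    displayName: Action
    type: string
    values: [infrastructure, environment-provision]

stages:
  # Compile-time conditions: only the stage matching the chosen action is rendered
  - ${{ if eq(parameters.action, 'infrastructure') }}:
      - template: templates/infrastructure.yml    # runs the common .tf files
        parameters:
          environment: ${{ parameters.environment }}
  - ${{ if eq(parameters.action, 'environment-provision') }}:
      - template: templates/environment.yml       # runs the env-specific .tf files
        parameters:
          environment: ${{ parameters.environment }}
```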
AWS recently published Best practices for using the Terraform AWS Provider, which includes a chapter on code base structure and organization; take a look, it might help you organize your code.
Also consider the best practices provided by HashiCorp on repository structure and on multiple environments.