
I am trying to set up an AWS ECS service via Terraform.

The ECS task (Fargate) runs an OpenLDAP Docker container.

I have everything working now; the ECS cluster, service, and tasks are created properly.

I expected that after doing a

terraform apply 

and after everything is up and running, the next terraform apply would simply do nothing, because nothing has changed.

But on a second terraform apply, Terraform wants to destroy the service (and its associated task) and then create a new service with a new task.

Why does it do that?

These are the Terraform definitions:

resource "aws_ecs_service" "openldap" {
  name            = "openldap"
  cluster         = aws_ecs_cluster.ecs_cluster_ldap.id
  task_definition = aws_ecs_task_definition.openldap.arn
  desired_count   = 1

  network_configuration {
    subnets = [aws_subnet.sn-public-access.id]
    security_groups = [aws_security_group.secgrp-public-access.id]
    assign_public_ip = true 
  }
}

resource "aws_ecs_task_definition" "openldap" {
  family = "openldap-task-definition"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu       = 512
  memory    = 1024
  execution_role_arn    = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name      = "openldap"
      image     = "public.ecr.aws/bitnami/openldap:2.6.7"
      cpu       = 512
      memory    = 1024
      essential = true
      portMappings = [
        {
          containerPort = 1389
          hostPort      = 1389
        },
        {
          containerPort = 1636
          hostPort      = 1636
        }
      ]
      environment = [
        { name = "LDAP_PORT_NUMBER", value = "1389" },
        { name = "LDAP_ROOT", value = "dc=example,dc=com"},
        { name = "LDAP_ADMIN_USERNAME", value = "admin" },
        { name = "LDAP_ADMIN_PASSWORD_FILE", value = "/openldap/ldap_admin_password.txt" },
        { name = "LDAP_CUSTOM_LDIF_DIR", value = "/openldap/ldif" },
        { name = "BITNAMI_DEBUG", value = "yes" },
        { name = "LDAP_TLS_CERT_FILE", value = "/openldap/certs/ldapserver-cert.pem" },
        { name = "LDAP_TLS_KEY_FILE", value = "/openldap/certs/ldapserver-key.pem" },
        { name = "LDAP_TLS_CA_FILE" , value = "/openldap/certs/ldapserver-cacerts.pem" },
        { name = "LDAP_ENABLE_TLS", value = "yes" },
        { name = "LDAP_ENABLE_RQUIRE_TLS", value = "no" },
        { name = "LDAP_LDAPS_PORT_NUMBER", value = "1636" }
      ]      
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          awslogs-group = "cwlg-ecs-ldap"
          awslogs-stream-prefix = "openldap"
          awslogs-region = var.region
        }
      }
      mountPoints = [
        {
          "sourceVolume": "openldap",
          "containerPath": "/openldap"
        }
      ]
    }
  ])

  volume {
    name = "openldap"

    efs_volume_configuration {
      file_system_id          = aws_efs_file_system.sandbox-efs.id
      root_directory          = "ldap/"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2049
    }
  }

  runtime_platform {
    operating_system_family = "LINUX"
    cpu_architecture        = "X86_64"
  }
}

And this is the output of the second terraform apply:

Terraform will perform the following actions:

  # aws_ecs_service.openldap must be replaced
-/+ resource "aws_ecs_service" "openldap" {
      - health_check_grace_period_seconds  = 0 -> null
      ~ iam_role                           = "/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS" -> (known after apply)
      ~ id                                 = "arn:aws:ecs:eu-central-1:xxxxxxxxxxxx:service/ecs_cluster_ldap/openldap" -> (known after apply)
      + launch_type                        = (known after apply)
        name                               = "openldap"
      ~ platform_version                   = "LATEST" -> (known after apply)
      - propagate_tags                     = "NONE" -> null
      - tags                               = {} -> null
      ~ triggers                           = {} -> (known after apply)
        # (10 unchanged attributes hidden)

      - capacity_provider_strategy { # forces replacement
          - base              = 1 -> null
          - capacity_provider = "FARGATE" -> null
          - weight            = 100 -> null
        }

      - deployment_circuit_breaker {
          - enable   = false -> null
          - rollback = false -> null
        }

      - deployment_controller {
          - type = "ECS" -> null
        }

        # (1 unchanged block hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

2 Answers


  1. Chosen as BEST ANSWER

    This is what I have now:

    Do I need the default_capacity_provider_strategy after all?

    resource "aws_ecs_cluster_capacity_providers" "capacity_providers" {
      cluster_name = aws_ecs_cluster.ecs_cluster_ldap.name
    
      capacity_providers = ["FARGATE"]
    
      default_capacity_provider_strategy {
        base              = 1
        weight            = 100
        capacity_provider = "FARGATE"
      }
    }
    
    resource "aws_ecs_task_definition" "openldap" {
      family = "openldap-task-definition"
      requires_compatibilities = ["FARGATE"]
      network_mode             = "awsvpc"
      cpu       = 256
      memory    = 512
      execution_role_arn    = aws_iam_role.ecs_task_execution_role.arn
    
      container_definitions = jsonencode([
        {
          name      = "openldap"
          image     = "public.ecr.aws/bitnami/openldap:2.6.7"
          essential = true
          portMappings = [
            {
              containerPort = 1389
              hostPort      = 1389
            },
            {
              containerPort = 1636
              hostPort      = 1636
            }
          ]
          environment = [
            { name = "LDAP_PORT_NUMBER", value = "1389" },
            { name = "LDAP_ROOT", value = "dc=example,dc=com"},
            { name = "LDAP_ADMIN_USERNAME", value = "admin" },
            { name = "LDAP_ADMIN_PASSWORD_FILE", value = "/openldap/ldap_admin_password.txt" },
            { name = "LDAP_CUSTOM_LDIF_DIR", value = "/openldap/ldif" },
            { name = "BITNAMI_DEBUG", value = "yes" },
            { name = "LDAP_TLS_CERT_FILE", value = "/openldap/certs/ldapserver-cert.pem" },
            { name = "LDAP_TLS_KEY_FILE", value = "/openldap/certs/ldapserver-key.pem" },
            { name = "LDAP_TLS_CA_FILE" , value = "/openldap/certs/ldapserver-cacerts.pem" },
            { name = "LDAP_ENABLE_TLS", value = "yes" },
            { name = "LDAP_ENABLE_REQUIRE_TLS", value = "no" },
            { name = "LDAP_LDAPS_PORT_NUMBER", value = "1636" }
          ]      
          logConfiguration = {
            logDriver = "awslogs"
            options = {
              awslogs-group = "cwlg-ecs-ldap"
              awslogs-stream-prefix = "openldap"
              awslogs-region = var.region
            }
          }
          mountPoints = [
            {
              "sourceVolume": "openldap",
              "containerPath": "/openldap"
            }
          ]
        }
      ])
    
      volume {
        name = "openldap"
    
        efs_volume_configuration {
          file_system_id          = aws_efs_file_system.sandbox-efs.id
          root_directory          = "ldap/"
          transit_encryption      = "ENABLED"
          transit_encryption_port = 2049
        }
      }
    
      runtime_platform {
        operating_system_family = "LINUX"
        cpu_architecture        = "X86_64"
      }
    
      depends_on = [
        aws_efs_file_system.sandbox-efs,
        time_sleep.wait_for_ec2-sandbox-linux-01
      ]
    }
    
    resource "aws_ecs_service" "openldap" {
      name            = "openldap"
      cluster         = aws_ecs_cluster.ecs_cluster_ldap.id
      task_definition = aws_ecs_task_definition.openldap.arn
      desired_count   = 1
    
      launch_type = "FARGATE"
    
      network_configuration {
        subnets = [aws_subnet.sn-public-access.id]
        security_groups = [aws_security_group.secgrp-public-access.id]
        assign_public_ip = true 
      }
    }
    
    resource "aws_iam_role" "ecs_task_execution_role" {
      name = "role-name"
     
      assume_role_policy = <<EOF
    {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Action": "sts:AssumeRole",
         "Principal": {
           "Service": "ecs-tasks.amazonaws.com"
         },
         "Effect": "Allow",
         "Sid": ""
       }
     ]
    }
    EOF
    }
    
    resource "aws_iam_role_policy_attachment" "ecs-task-execution-role-policy-attachment" {
      role       = aws_iam_role.ecs_task_execution_role.name
      policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
    }
    

2. The terraform apply output is useful for debugging why a resource will be recreated. Look out for any lines marked with the comment # forces replacement; this is Terraform telling you which attribute is out of sync and requires the resource to be recreated to bring it back in sync.

    In your case, it is the capacity_provider_strategy. Whilst you do not specify it in your definition, under the hood many resources assume default values.

    Your aws_ecs_service resource definition should specify either the launch_type or the capacity_provider_strategy argument. This tells AWS how to find capacity for your service.

    You specify neither, so I suspect AWS defaults to using Fargate, as reported in your second Terraform apply step:

          - capacity_provider_strategy { # forces replacement
              - base              = 1 -> null
              - capacity_provider = "FARGATE" -> null
              - weight            = 100 -> null
            }
    

    Given you want to use Fargate, choose one of the two approaches: launch_type or capacity_provider_strategy. The former is simpler if you don't need to mix capacity providers.
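
    For completeness, the capacity_provider_strategy route would look roughly like this; this is a sketch mirroring the base/weight values from your plan output, and it assumes the FARGATE capacity provider is associated with the cluster (e.g. via aws_ecs_cluster_capacity_providers). Note that launch_type and capacity_provider_strategy are mutually exclusive on a service:

    resource "aws_ecs_service" "openldap" {
      name            = "openldap"
      cluster         = aws_ecs_cluster.ecs_cluster_ldap.id
      task_definition = aws_ecs_task_definition.openldap.arn
      desired_count   = 1

      # Declared explicitly so the configuration matches what AWS
      # applied implicitly, instead of drifting on every plan.
      capacity_provider_strategy {
        capacity_provider = "FARGATE"
        base              = 1
        weight            = 100
      }

      network_configuration {
        subnets          = [aws_subnet.sn-public-access.id]
        security_groups  = [aws_security_group.secgrp-public-access.id]
        assign_public_ip = true
      }
    }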

    Therefore, try adding launch_type = "FARGATE" to your ECS service definition:

    resource "aws_ecs_service" "openldap" {
      name            = "openldap"
      cluster         = aws_ecs_cluster.ecs_cluster_ldap.id
      task_definition = aws_ecs_task_definition.openldap.arn
      desired_count   = 1
    
      launch_type = "FARGATE"
    
      network_configuration {
        subnets = [aws_subnet.sn-public-access.id]
        security_groups = [aws_security_group.secgrp-public-access.id]
        assign_public_ip = true 
      }
    }
    

    The first terraform apply after this change will likely recreate the service, but subsequent runs will not.
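
    Once the service has been replaced, a follow-up plan should come back clean. The exact message depends on your Terraform version, but you should see something like:

    $ terraform plan
    ...
    No changes. Your infrastructure matches the configuration.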
