
I have a Terraform codebase which deploys a private EKS cluster, a bastion host and other AWS services. I have also added a few security groups in Terraform. One of the security groups allows inbound traffic from my home IP to the bastion host so that I can SSH onto that node. This security group is called bastionSG, and that works fine.

However, initially I am unable to run kubectl from my bastion host, which is the node I use for my Kubernetes development against the EKS cluster nodes. The reason is that my EKS cluster is private and only allows communication from nodes in the same VPC, so I need to add a security group rule that allows communication from my bastion host to the cluster control plane, which is where my security group bastionSG comes in.

So my routine now is: once Terraform has deployed everything, I find the automatically generated EKS security group and add my bastionSG as an inbound rule to it through the AWS Console (UI), as shown in the image below.

[Screenshot: the EKS-created security group in the AWS Console, with bastionSG added as an inbound rule]

I would like to NOT have to do this through the UI, as I am already using Terraform to deploy my entire infrastructure.

I know I can query an existing security group like this:

data "aws_security_group" "selectedSG" {
  id = var.security_group_id
}

In this case, let's say selectedSG is the security group created by EKS once Terraform has completed the apply process. I would like to then add an inbound rule for bastionSG to it without overwriting the rules that were added automatically.

UPDATE:

EKS node group

resource "aws_eks_node_group" "flmd_node_group" {
  cluster_name    = var.cluster_name
  node_group_name = var.node_group_name
  node_role_arn   = var.node_pool_role_arn
  subnet_ids      = [var.flmd_private_subnet_id]
  instance_types  = ["t2.small"]

  scaling_config {
    desired_size = 3
    max_size     = 3
    min_size     = 3
  }

  update_config {
    max_unavailable = 1
  }

  remote_access {
    ec2_ssh_key = "MyPemFile"
    source_security_group_ids = [
      var.allow_tls_id,
      var.allow_http_id,
      var.allow_ssh_id,
      var.bastionSG_id
    ]
  }

  tags = {
    "Name" = "flmd-eks-node"
  }
}

As shown above, the EKS node group has the bastionSG security group in it, which I expect to allow the connection from my bastion host to the EKS control plane.

EKS Cluster

resource "aws_eks_cluster" "flmd_cluster" {
  name     = var.cluster_name
  role_arn = var.role_arn

  vpc_config {
    subnet_ids              = [var.flmd_private_subnet_id, var.flmd_public_subnet_id, var.flmd_public_subnet_2_id]
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = [var.bastionSG_id]
  }
}

bastionSG_id is an output of the security group created below, which is passed into the code above as a variable (a sketch of that wiring follows the resource below).

BastionSG security group

resource "aws_security_group" "bastionSG" {
  name        = "Home to bastion"
  description = "Allow SSH - Home to Bastion"
  vpc_id      = var.vpc_id

  ingress {
    description      = "Home to bastion"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = [<MY HOME IP address>]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "Home to bastion"
  }
}
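
For reference, here is a minimal sketch of how that wiring could look, assuming bastionSG lives in its own module and the EKS code receives its ID as an input variable (the descriptions here are illustrative):

# In the module that creates bastionSG
output "bastionSG_id" {
  description = "ID of the Home-to-bastion security group"
  value       = aws_security_group.bastionSG.id
}

# In the module that creates the EKS cluster and node group
variable "bastionSG_id" {
  description = "ID of the security group attached to the bastion host"
  type        = string
}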

2 Answers


  1. Chosen as BEST ANSWER

    There was a simpler solution.

    Query AWS using a Terraform data source to get the ID of the EKS-created security group, then use that ID to create an aws_security_group_rule with the required inbound rule, without touching the rules EKS added automatically.
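
    A minimal sketch of that approach, assuming the EKS-created security group's ID is available (for example from aws_eks_cluster.flmd_cluster.vpc_config[0].cluster_security_group_id, or passed in as var.security_group_id as in the question); the resource name bastion_to_eks and the specific 443 rule are illustrative of allowing kubectl traffic to the cluster API endpoint:

    data "aws_security_group" "selectedSG" {
      id = var.security_group_id # the security group EKS created
    }

    # Adds a single inbound rule to the EKS-created security group without
    # overwriting the rules EKS manages itself.
    resource "aws_security_group_rule" "bastion_to_eks" {
      type                     = "ingress"
      from_port                = 443 # Kubernetes API server
      to_port                  = 443
      protocol                 = "tcp"
      source_security_group_id = var.bastionSG_id
      security_group_id        = data.aws_security_group.selectedSG.id
      description              = "Allow kubectl from the bastion host"
    }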


  2. Let's start by first creating a public security group.

    ################################################################################
    # Create the Security Group
    ################################################################################
    resource "aws_security_group" "public" {
      vpc_id      = local.vpc_id
      name        = format("${var.name}-${var.public_security_group_suffix}-SG")
      description = format("${var.name}-${var.public_security_group_suffix}-SG")
      dynamic "ingress" {
        for_each = var.public_security_group_ingress
        content {
          cidr_blocks      = lookup(ingress.value, "cidr_blocks", [])
          ipv6_cidr_blocks = lookup(ingress.value, "ipv6_cidr_blocks", [])
          from_port        = lookup(ingress.value, "from_port", 0)
          to_port          = lookup(ingress.value, "to_port", 0)
          protocol         = lookup(ingress.value, "protocol", "-1")
        }
      }
      dynamic "egress" {
        for_each = var.public_security_group_egress
        content {
          cidr_blocks      = lookup(egress.value, "cidr_blocks", [])
          ipv6_cidr_blocks = lookup(egress.value, "ipv6_cidr_blocks", [])
          from_port        = lookup(egress.value, "from_port", 0)
          to_port          = lookup(egress.value, "to_port", 0)
          protocol         = lookup(egress.value, "protocol", "-1")
        }
      }
      tags = merge(
        {
          "Name" = format(
            "${var.name}-${var.public_security_group_suffix}-SG",
          )
        },
        var.tags,
      )
    }
    

    Now create a private security group, allowing inbound traffic from the public security group and outbound traffic to the ElastiCache and RDS security groups.

    resource "aws_security_group" "private" {
      vpc_id      = local.vpc_id
      name        = format("${var.name}-${var.private_security_group_suffix}-SG")
      description = format("${var.name}-${var.private_security_group_suffix}-SG")
    
      ingress {
        security_groups = [aws_security_group.public.id]
        from_port       = 0
        to_port         = 0
        protocol        = "-1"
      }
      dynamic "ingress" {
        for_each = var.private_security_group_ingress
        content {
          cidr_blocks      = lookup(ingress.value, "cidr_blocks", [])
          ipv6_cidr_blocks = lookup(ingress.value, "ipv6_cidr_blocks", [])
          from_port        = lookup(ingress.value, "from_port", 0)
          to_port          = lookup(ingress.value, "to_port", 0)
          protocol         = lookup(ingress.value, "protocol", "-1")
        }
      }
      dynamic "egress" {
        for_each = var.private_security_group_egress
        content {
          cidr_blocks      = lookup(egress.value, "cidr_blocks", [])
          ipv6_cidr_blocks = lookup(egress.value, "ipv6_cidr_blocks", [])
          from_port        = lookup(egress.value, "from_port", 0)
          to_port          = lookup(egress.value, "to_port", 0)
          protocol         = lookup(egress.value, "protocol", "-1")
        }
      }
      egress {
        security_groups = [aws_security_group.elsaticache_private.id] # it communicates via network interfaces
        from_port       = 6379                                        # redis port
        to_port         = 6379
        protocol        = "tcp"
      }
      egress {
        security_groups = [aws_security_group.rds_mysql_private.id]
        from_port       = 3306
        to_port         = 3306
        protocol        = "tcp"
      }
      tags = merge(
        {
          "Name" = format(
            "${var.name}-${var.private_security_group_suffix}-SG"
          )
        },
        var.tags,
      )
      depends_on = [aws_security_group.elsaticache_private, aws_security_group.rds_mysql_private]
    }
    

    Create just an egress rule in the ElastiCache security group, then add the ingress rule from the private security group as a separate aws_security_group_rule resource, which resolves the circular dependency between the two groups. The same goes for the RDS security group.

    resource "aws_security_group" "elsaticache_private" {
      vpc_id      = local.vpc_id
      name        = format("${var.name}-${var.private_security_group_suffix}-elasticache-SG")
      description = format("${var.name}-${var.private_security_group_suffix}-elasticache-SG")
      egress {
        cidr_blocks      = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
        from_port        = 0
        to_port          = 0
        protocol         = "-1"
      }
      tags = merge(
        {
          "Name" = format(
            "${var.name}-${var.public_security_group_suffix}-elasticache-SG",
          )
        },
        var.tags,
      )
    }
    
    resource "aws_security_group_rule" "elsaticache_private_rule" {
      type                     = "ingress"
      from_port                = 6379 # redis port
      to_port                  = 6379
      protocol                 = "tcp"
      source_security_group_id = aws_security_group.private.id
      security_group_id        = aws_security_group.elsaticache_private.id
      depends_on               = [aws_security_group.private]
    }
    
    resource "aws_security_group" "rds_mysql_private" {
      vpc_id      = local.vpc_id
      name        = format("${var.name}-${var.private_security_group_suffix}-rds-mysql-SG")
      description = format("${var.name}-${var.private_security_group_suffix}-rds-mysql-SG")
      egress {
        cidr_blocks      = ["0.0.0.0/0"]
        ipv6_cidr_blocks = ["::/0"]
        from_port        = 0
        to_port          = 0
        protocol         = "-1"
      }
      tags = merge(
        {
          "Name" = format(
            "${var.name}-${var.public_security_group_suffix}-rds-mysql-SG",
          )
        },
        var.tags,
      )
    }
    
    resource "aws_security_group_rule" "rds_mysql_private_rule" {
      type                     = "ingress"
      from_port                = 3306 # mysql / aurora port
      to_port                  = 3306
      protocol                 = "tcp"
      source_security_group_id = aws_security_group.private.id
      security_group_id        = aws_security_group.rds_mysql_private.id
      depends_on               = [aws_security_group.private]
    }
    