I have a Terraform codebase which deploys a private EKS cluster, a bastion host, and other AWS services. I have also added a few security groups in Terraform. One of the security groups, called bastionSG, allows inbound traffic from my home IP to the bastion host so that I can SSH onto that node, and that works fine.
However, I am initially unable to run kubectl from my bastion host, which is the node I use for my Kubernetes development against the EKS cluster nodes. The reason is that my EKS cluster is private and only allows communication from nodes in the same VPC, so I need a security group that allows communication from my bastion host to the cluster control plane, which is where my bastionSG security group comes in.
So my routine now is: once Terraform deploys everything, I find the automatically generated EKS security group and add bastionSG as an inbound rule to it through the AWS Console (UI).
I would like to NOT have to do this through the UI, since I am already using Terraform to deploy my entire infrastructure. I know I can query an existing security group like this:
data "aws_security_group" "selectedSG" {
id = var.security_group_id
}
In this case, let's say selectedSG is the security group created by EKS once Terraform has completed the apply process. I would like to then add an inbound rule for bastionSG to it, without overwriting the rules EKS added automatically.
UPDATE: > EKS NODE GROUP
resource "aws_eks_node_group" "flmd_node_group" {
cluster_name = var.cluster_name
node_group_name = var.node_group_name
node_role_arn = var.node_pool_role_arn
subnet_ids = [var.flmd_private_subnet_id]
instance_types = ["t2.small"]
scaling_config {
desired_size = 3
max_size = 3
min_size = 3
}
update_config {
max_unavailable = 1
}
remote_access {
ec2_ssh_key = "MyPemFile"
source_security_group_ids = [
var.allow_tls_id,
var.allow_http_id,
var.allow_ssh_id,
var.bastionSG_id
]
}
tags = {
"Name" = "flmd-eks-node"
}
}
As shown above, the EKS node group includes the bastionSG security group, which I expect to allow the connection from my bastion host to the EKS control plane.
EKS Cluster
resource "aws_eks_cluster" "flmd_cluster" {
name = var.cluster_name
role_arn = var.role_arn
vpc_config {
subnet_ids =[var.flmd_private_subnet_id, var.flmd_public_subnet_id, var.flmd_public_subnet_2_id]
endpoint_private_access = true
endpoint_public_access = false
security_group_ids = [ var.bastionSG_id]
}
}
bastionSG_id is an output of the security group created below, which is passed into the code above as a variable.
BastionSG security group
resource "aws_security_group" "bastionSG" {
name = "Home to bastion"
description = "Allow SSH - Home to Bastion"
vpc_id = var.vpc_id
ingress {
description = "Home to bastion"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [<MY HOME IP address>]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Name = "Home to bastion"
}
}
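For reference, the bastionSG_id output mentioned above would be exported roughly like so (a sketch; the actual output wiring in the module may differ):

output "bastionSG_id" {
  value = aws_security_group.bastionSG.id
}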
2 Answers
There was a simpler solution: query AWS with a Terraform data source to get the ID of the EKS-created security group, then use that ID to create an aws_security_group_rule with the required inbound rule, as sketched below.
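A minimal sketch of that approach, assuming the cluster security group can be looked up via the aws:eks:cluster-name tag that EKS applies to it (the resource names and port are illustrative):

data "aws_security_group" "eks_cluster_sg" {
  tags = {
    "aws:eks:cluster-name" = var.cluster_name
  }
}

resource "aws_security_group_rule" "bastion_to_eks" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = data.aws_security_group.eks_cluster_sg.id
  source_security_group_id = var.bastionSG_id
  description              = "Bastion host to EKS control plane"
}

Because aws_security_group_rule defines a standalone rule, it is added alongside the rules EKS created automatically instead of replacing them. Alternatively, the aws_eks_cluster resource itself exports the same ID as vpc_config[0].cluster_security_group_id, which avoids the data source lookup entirely.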
Let's start by creating a public security group.
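Something like the following, assuming a web-facing group that accepts HTTPS from anywhere (all names and ports here are illustrative):

resource "aws_security_group" "public_sg" {
  name        = "public-sg"
  description = "Public-facing security group"
  vpc_id      = var.vpc_id

  ingress {
    description = "HTTPS from anywhere"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}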
Now create a private security group, with inbound from the public security group and outbound to the ElastiCache and RDS security groups.
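A sketch of that, assuming elasticache_sg is defined as in the next step and rds_sg analogously (ports are illustrative: 6379 for Redis, 5432 for PostgreSQL):

resource "aws_security_group" "private_sg" {
  name        = "private-sg"
  description = "Private security group"
  vpc_id      = var.vpc_id

  ingress {
    description     = "All traffic from the public security group"
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    security_groups = [aws_security_group.public_sg.id]
  }

  egress {
    description     = "Redis to ElastiCache"
    from_port       = 6379
    to_port         = 6379
    protocol        = "tcp"
    security_groups = [aws_security_group.elasticache_sg.id]
  }

  egress {
    description     = "PostgreSQL to RDS"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.rds_sg.id]
  }
}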
In the ElastiCache security group, define only the egress rule inline, then add the ingress rule from the private security group as a standalone rule, since keeping it outside the resource resolves the circular dependency between the two groups. The same goes for the RDS security group, as sketched below.
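A sketch of that pattern for ElastiCache (RDS would mirror it with its own port):

resource "aws_security_group" "elasticache_sg" {
  name        = "elasticache-sg"
  description = "ElastiCache security group"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# A standalone rule rather than an inline ingress block, so elasticache_sg
# and private_sg never reference each other directly and no cycle forms.
resource "aws_security_group_rule" "elasticache_from_private" {
  type                     = "ingress"
  from_port                = 6379
  to_port                  = 6379
  protocol                 = "tcp"
  security_group_id        = aws_security_group.elasticache_sg.id
  source_security_group_id = aws_security_group.private_sg.id
}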