Using Terraform, I want to build an infrastructure that consists of an external load balancer (LB) and a MIG with 3 VMs. The LB should be accessible only from my IP (via ports 80 and 22). Each VM within the MIG should run a server that listens on port 8080. Furthermore, I would like to set up health checks for the MIG.
To achieve this, I'm using the following Terraform modules: "GoogleCloudPlatform/lb-http/google" and "terraform-google-modules/vm/google//modules/mig". Unfortunately, after running terraform apply, all health checks fail and the LB is not accessible.

I will put my code at the end of this post, but first I would like to understand the different attributes of the modules quoted above:

  1. Does the MIG module’s attribute named_ports refer to the port where my servers run? In my case, 8080.
  2. Does the MIG module’s health_check attribute refer to the VMs within the MIG? If yes, then I assume that the port attribute of the health_check attribute should refer to the port where the servers run, again, 8080.
  3. Does the LB module's backends attribute refer to the VMs within the MIG? Should the port attribute of the default backend again point to 8080?
  4. Finally, the LB module's health_check attribute is the same as the MIG's, right? Once again, the port specified there should be 8080.
  5. Should the firewall rule for allowing health checks (see below) be applied to the LB or the MIG?

Here’s the main.tf file:

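# getip.sh must print a single JSON object on stdout (e.g. {"internet_ip": "1.2.3.4"});
# that is what populates data.external.my_ip_addr.result["internet_ip"] below.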
data "external" "my_ip_addr" {
  program = ["/bin/bash", "${path.module}/getip.sh"]
}


resource "google_project_service" "project" {
  // ...
}

resource "google_service_account" "service-acc" {
  // ...
}

resource "google_compute_network" "vpc-network" {
  project = var.pro
  name = var.network_name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnetwork" {
  name = "subnetwork"
  ip_cidr_range = "10.0.101.0/24"
  region = var.region
  project = var.pro
  stack_type = "IPV4_ONLY"
  network = google_compute_network.vpc-network.self_link
}

resource "google_compute_firewall" "allow-internal" {
  name    = "allow-internal"
  project = var.pro
  network = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports = ["80"]
  }
  source_ranges = ["10.0.101.0/24"]
}

resource "google_compute_firewall" "allow-ssh" {
  project = var.pro
  name          = "allow-ssh"
  direction     = "INGRESS"
  network       = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports = ["22"]
  }
  target_tags   = ["allow-ssh"] 
  source_ranges = [format("%s/%s", data.external.my_ip_addr.result["internet_ip"], 32)]
}

resource "google_compute_address" "static" {
  project = var.pro
  region = var.region
  name = "ipv4-address"
}

resource "google_compute_instance" "ssh-vm" {
  name = "ssh-vm"
  machine_type = "e2-standard-2"
  project = var.pro
  tags = ["allow-ssh"]
  zone = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "ubuntu-2004-focal-v20221213"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.subnetwork.self_link
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  metadata = {
    startup-script = <<-EOF
        #!/bin/bash
        sudo snap install docker
        sudo docker version > file1.txt
        sleep 5
        sudo docker run -d --rm -p ${var.server_port}:${var.server_port} \
        busybox sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
        echo 'yo'; } | nc -l -p ${var.server_port}; done"
        EOF
  }

}

module "instance_template" {
  source = "terraform-google-modules/vm/google//modules/instance_template"
  version = "7.9.0"
  region = var.region
  project_id = var.pro
  network = google_compute_network.vpc-network.self_link
  subnetwork = google_compute_subnetwork.subnetwork.self_link
  service_account = {
    email = google_service_account.service-acc.email
    scopes = ["cloud-platform"]
  }

  name_prefix = "webserver"
  tags = ["template-vm", "allow-ssh"]
  machine_type = "e2-standard-2"
  startup_script = <<-EOF
  #!/bin/bash
  sudo snap install docker
  sudo docker version > docker_version.txt
  sleep 5
  sudo docker run -d --rm -p ${var.server_port}:${var.server_port} \
  busybox sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
  echo 'yo'; } | nc -l -p ${var.server_port}; done"
  EOF
  source_image = "https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20221213"
  disk_size_gb = 10
  disk_type = "pd-balanced"
  preemptible = true

}

module "vm_mig" {
  source  = "terraform-google-modules/vm/google//modules/mig"
  version = "7.9.0"
  project_id = var.pro
  region = var.region
  target_size = 3
  instance_template = module.instance_template.self_link
  // A load balancer sends incoming traffic to the group via its named ports:
  // when a request reaches the LB, it is forwarded to the port named "http" on the VMs.
  named_ports = [{
    name = "http"
    port = 80
  }]
  health_check = {
    type = "http"
    initial_delay_sec = 30
    check_interval_sec = 30
    healthy_threshold = 1
    timeout_sec = 10
    unhealthy_threshold = 5
    response = ""
    proxy_header = "NONE"
    port = 80
    request = ""
    request_path = "/"
    host = ""
  }
  network = google_compute_network.vpc-network.self_link
  subnetwork = google_compute_subnetwork.subnetwork.self_link
}

module "gce-lb-http" {
  source            = "GoogleCloudPlatform/lb-http/google"
  version           = "~> 4.4"
  project           = var.pro
  name              = "group-http-lb"
  // This tag must match the tag from the instance template
  // This will create the default health check firewall rule
  // and apply it to the machines tagged with the "template-vm" tag
  target_tags       = ["template-vm"]
  // the name of the network where the default health check will be created
  firewall_networks = [google_compute_network.vpc-network.name]
  backends = {
    default = {
      description                     = null
      port                            = 80
      protocol                        = "HTTP"
      port_name                       = "http"
      timeout_sec                     = 10
      enable_cdn                      = false
      custom_request_headers          = null
      custom_response_headers         = null
      security_policy                 = null
      connection_draining_timeout_sec = null
      session_affinity                = null
      affinity_cookie_ttl_sec         = null

      health_check = {
        check_interval_sec  = null
        timeout_sec         = null
        healthy_threshold   = null
        unhealthy_threshold = null
        request_path        = "/"
        port                = 80
        host                = null
        logging             = null
      }

      log_config = {
        enable = true
        sample_rate = 1.0
      }

      groups = [
        {
          # Each instance group backing the service should be added to the backend.
          group                        = module.vm_mig.instance_group
          balancing_mode               = null
          capacity_scaler              = null
          description                  = null
          max_connections              = null
          max_connections_per_instance = null
          max_connections_per_endpoint = null
          max_rate                     = null
          max_rate_per_instance        = null
          max_rate_per_endpoint        = null
          max_utilization              = null
        },
      ]

      iap_config = {
        enable               = false
        oauth2_client_id     = null
        oauth2_client_secret = null
      }
    }
  }
}


2 Answers


  1. I can see you created firewall rules with target tags in your network (var.network_name), but I don't see those tags added to your VM/MIG. To make the health check firewall rule work, you need to add those tags to the instances in your MIG (via the instance template).
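
    For instance, a minimal sketch using the names already in the question (the LB module's target_tags must match a tag carried by the MIG's instances through the instance template):

    module "instance_template" {
      # ...
      tags = ["template-vm", "allow-ssh"]   # these tags end up on every MIG instance
    }

    module "gce-lb-http" {
      # ...
      # Creates the default health check firewall rule and applies it
      # to instances carrying this tag.
      target_tags = ["template-vm"]
    }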

  2. Does the MIG module’s attribute named_ports refer to the port where my servers run?

    Sort of. A named port is a key/value pair applied to an instance group, not an instance directly. An example might be { name: "gunicorn", port: 8000 }

    These can be a bit of a pain to work with if the instance groups and named ports are created by different teams or different processes, since one may overwrite the other.
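
    Applied to this question, a minimal sketch (assuming the containers really listen on 8080) would point the MIG's named port at 8080 rather than 80:

    named_ports = [{
      name = "http"
      port = 8080   # the port the servers actually listen on
    }]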

    Does the MIG module’s health_check attribute refer to the VMs within the MIG?

    Yes, the healthcheck will need to hit the VMs directly. This is so the MIG can detect instance failures and launch replacements in order to meet load requirements.

    If yes, then I assume that the port attribute of the health_check attribute should refer to the port where the servers run, again, 8080.

    Yes, it’s the port of the service running on the VM.
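
    As a sketch, the health_check from the question's MIG module would then probe 8080 instead of 80 (all other fields unchanged):

    health_check = {
      type         = "http"
      port         = 8080   # probe the service on each VM directly
      request_path = "/"
      # ... remaining fields as in the question
    }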

    Does the LB module's backends attribute refer to the VMs within the MIG? Should the port attribute of the default backend again point to 8080?

    Similar to question #1, the answer is ‘yes, but not directly’. The backend will reference one or more instance group IDs, and the instance groups will of course contain the relevant instances.

    One of the things that makes GCP HTTP(S) load balancers tricky to work with is that they are made up of several distinct components under the hood. Broken down by traffic flow, it looks like this:

    LB IP Address -> Forwarding Rule -> Target proxy -> URL Map -> Backend Service -> Instance Group -> Instance -> Service

    The backend service sends traffic to the instance group using protocol + named port; the instance group then maps that traffic to a specific port number using its named port definition.
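
    In the question's LB module that means port_name must match the MIG's named port, and both should resolve to the service port (a sketch, assuming 8080):

    backends = {
      default = {
        protocol  = "HTTP"
        port      = 8080     # the port the named port resolves to
        port_name = "http"   # must match named_ports[].name on the MIG
        # ...
      }
    }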

    Finally, the LB module's health_check attribute is the same as the MIG's, right?

    Yes. Just like with the MIG, the health check will need to hit the instances directly, which requires firewall rules to be open from source ranges 35.191.0.0/16 + 130.211.0.0/22 to the applicable instances on the relevant ports.

    For firewall rules, I personally recommend just allowing this traffic to all instances on ports 1-65535, but your security policy may require it to be more granular. You can use network tags or service accounts to apply the rule only to specific groups of instances.
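
    As a sketch of such a rule, built from the resources already in the question (the rule name is hypothetical):

    resource "google_compute_firewall" "allow-health-checks" {
      name          = "allow-health-checks"   # hypothetical name
      project       = var.pro
      network       = google_compute_network.vpc-network.self_link
      direction     = "INGRESS"
      source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]
      target_tags   = ["template-vm"]
      allow {
        protocol = "tcp"
        ports    = ["8080"]
      }
    }

    Note that the lb-http module can create an equivalent rule for you through its firewall_networks and target_tags inputs, as the question's code already does.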

    Should the firewall rule for allowing health checks (see below) be applied to the LB or the MIG?

    Neither. It should be applied to the instances.

    Unlike AWS, GCP HTTP(S) load balancers don’t use any firewall rules on the frontend. On the backend, they generally use the same IP ranges as the healthcheck. The two exceptions are:

    1. Envoy-based Regional HTTP(S) load balancers, which use the "regional managed proxy" or "proxy-only" subnet of the applicable VPC network. This is a /26 or larger that must be created manually (a sketch follows below).
    2. Internet Network Endpoint Groups (INEGs), which use 34.127.192.0/18 + 34.96.0.0/20.
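
    The question uses the global (classic) HTTP LB module, so this only matters if you switch to the Envoy-based regional flavor. With a recent google provider, the proxy-only subnet could look like this (the name and range are hypothetical):

    resource "google_compute_subnetwork" "proxy_only" {
      name          = "proxy-only-subnet"    # hypothetical name
      project       = var.pro
      region        = var.region
      network       = google_compute_network.vpc-network.self_link
      ip_cidr_range = "10.0.102.0/26"        # any unused /26 or larger
      purpose       = "REGIONAL_MANAGED_PROXY"
      role          = "ACTIVE"
    }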